SymmetricDiffusers: Learning Discrete Diffusion on Finite Symmetric Groups
Reject
Summary: I am unable to review this paper as it lies outside my area of expertise. Strengths: I am unable to review this paper as it lies outside my area of expertise. Weaknesses: I am unable to review this paper as it lies outside my area of expertise. Technical Quality: 3 Clarity: 3 Questions for Authors: I am unable to review this paper as it lies outside my area of expertise. Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I am unable to review this paper as it lies outside my area of expertise. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Summary: The authors aim to create a discrete diffusion model that generates permutations. This model can then be used to solve combinatorial problems including jigsaw puzzles and travelling salesman problems. To formulate their model they cover a range of forward shuffling strategies and discuss how to parametrize the reverse transition. During sampling, they also use beam search to find high-probability samples. They find their method performs competitively in computational experiments. Strengths: The idea of using riffle shuffles to create a corruption process over permutations and then parametrizing the time reversal of this process is novel and interesting. I enjoyed reading the paper. I believe further work will build on this, as there are many instances in machine learning where the need to learn permutations crops up. The paper is quite well written and easy to understand. It is not overloaded with mathematical equations, and intuition is given for some concepts. The experimental results seem promising as it performs on par with or better than (especially in high dimensions) previous methods for learning permutations. I appreciate the ablation studies into greedy search vs beam search and types of shuffle (in the appendix). Weaknesses: I think some further clarification is required for the weaknesses of the random transposition and random insertion styles of card shuffling. You later say that you can merge steps of the forward process if each individual step does not induce enough mixing, and so the stated weakness that these styles of shuffle have slow mixing seems moot. You mention that you do not have access to $q(X_t | X_0)$, and I think it should also be discussed that $q(X_{t-1} | X_t, X_0)$ is also unavailable, since this distribution is used in standard diffusion models to rewrite the variational bound in a lower variance form; see Appendix A in https://arxiv.org/pdf/2006.11239 . 
You dedicate a lot of space to discussing the various forward noising processes with different shuffling methods, which is quite interesting. However, the ablations with these different styles of shuffle are in the appendix, and I think they should be in the main text: since these shuffles are given such prominence earlier in the discussion of the method, it is strange that they are not included in the main experiments. I find it difficult to follow the description of the inverse transposition parametrization; no intuition is given for the functions $\phi$ and $\psi$ nor for the functional form of $p_{IT}(\sigma)$. Perhaps this is due to space limitations, but since inverse transposition is not in the main experiments (see above point), I think you should either relegate much of this to the appendix if you only use the riffle shuffle in practice, or shift the wording to properly explain these types of forward and inverse process and include experiments for them in the main text. Technical Quality: 3 Clarity: 3 Questions for Authors: When you compute the loss function in equation (10), do you make $T$ calls to the neural network because you calculate the full variational bound for every iteration? This would indeed be very memory intensive; isn't there an alternative where, although you still need to run the full forward process since $q(X_t | X_0)$ is intractable, you compute the loss on only a subset of the $p_\theta(X_{t-1} | X_t)$ reverse transitions? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors do a good job of discussing the limitations of various parametrizations of their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful and constructive comments. We appreciate your positive feedback and address the questions below. > **Q1:** You later say that you can merge steps of the forward process if each individual step does not induce enough mixing and so the stated weakness that these styles of shuffle have slow mixing seems moot. > **A1:** We’d like to clarify that we can only merge steps in the reverse process and ***not in the forward process***. While merging steps in the reverse process makes loss computation more efficient, we still need to run the whole forward process for random transpositions and random insertions. And since the mixing times for random transpositions and random insertions are much slower than for riffle shuffles, riffle shuffles should be the preferred shuffling method. > **Q2:** I think it should also be discussed that $q(X_{t-1}|X_t,X_0)$ is also unavailable since this distribution is used in standard diffusion models to rewrite the variational bound in a lower variance form... > **A2:** Yes, the posterior $q(X_{t-1}|X_t,X_0)$ is unavailable for most shuffling methods, so we cannot rewrite the ELBO in a lower variance form like in common diffusion models. It is worth noting that even though $q(X_{t-1}|X_t,X_0)$ is available for riffle shuffles, the common parameterization presented in diffusion models is still not helpful. Please refer to the General Replies for a detailed discussion. > **Q3:** However, the ablations with these different styles of shuffle are in the appendix and I think it should be in the main … > **A3:** We appreciate this suggestion and will reorganize the paper's structure to include the ablation. > **Q4:** I find it difficult to follow the description of the inverse transposition parametrization; there is no intuition given for the functions $\phi$ and $\psi$ nor the functional form of $p_{\text{IT}}(\sigma)$... 
> **A4:** We apologize for the confusion regarding the inverse transposition parameterization. Due to the space limit, we indeed shortened the description. We will reorganize the sections and include the following intuitions. The support of the IT distribution is the set of all transpositions plus the identity permutation. Let us treat the identity permutation $\mathrm{Id}$ separately, and we use a parameter $\tau$ to assign $p_{\mathrm{IT}}(\mathrm{Id}) = 1 - \mathrm{sigmoid}(\tau)$. Then we assign probabilities to the transpositions. A transposition is essentially an *unordered* pair of *distinct* indices, so we use $n$ parameters $\mathbf{s}=(s_1,\ldots,s_n)$ to represent the logits of each index getting picked. Thus, for indices $i\neq j$, it is natural to let $$ p_{\mathrm{IT}}(\sigma) = \mathrm{sigmoid}(\tau)\left( \frac{\exp(s_i)}{\sum_{k=1}^{n}\exp(s_k)}\cdot\frac{\exp{(s_j)}}{\sum_{k\neq i}\exp(s_k)} + \frac{\exp(s_j)}{\sum_{k=1}^{n}\exp(s_k)}\cdot\frac{\exp{(s_i)}}{\sum_{k\neq j}\exp(s_k)} \right) $$ for a transposition $\sigma=\begin{pmatrix}i&j\end{pmatrix}\neq\mathrm{Id}$, where $\mathrm{sigmoid}(\tau)$ is the probability of not choosing $\mathrm{Id}$. The expression in the parentheses is the probability of choosing the unordered pair $i$ and $j$, which is equal to the probability of choosing $i$ and then $j$, plus the probability of choosing $j$ and then $i$. > **Q5:** Isn't there an alternative where although you need to run the full forward process since $q\left(X_t | X_0\right)$ is intractable, you could still only compute the loss on some subset of the reverse transitions… > **A5:** Our framework allows computing the loss on some subset of the trajectory to be more memory efficient. 
For example, we could randomly sample one timestep from a denoising schedule $[t_0,\ldots,t_k]$ and compute the loss as $$ \mathbb{E}_{p_{\text{data}}(X_0,\mathcal{X})} \mathbb{E}_{i}\mathbb{E}_{q(X_{t_{i-1}}|X_0)} \mathbb{E}_{q(X_{t_{i}}|X_{t_{i-1}})} \Big[-\log p_{\theta}(X_{t_{i-1}}|X_{t_i})\Big], $$ omitting constant terms with respect to $\theta$. In fact, for riffle shuffles, we could sample $X_{t_{i-1}}$ directly for an arbitrary timestep $t_{i-1}$ from $X_0$ [1]. For other shuffling methods, we would have to run the entire forward process. It is also worth noting that computing the loss on a subset of the trajectory could potentially introduce more variance during training. So, there is a tradeoff, and the exact design choice should depend on the problem setup. In our current implementations of the experiments, we make $T$ (parallelized) calls to the neural network to compute the expectation w.r.t. time in the ELBO exactly. This is because, as discussed in our paper, the forward trajectories are usually short for riffle shuffles, so exact computation of the variational bound is feasible and can help reduce variance. [1] Dave Bayer and Persi Diaconis. Trailing the dovetail shuffle to its lair. *The Annals of Applied Probability*, 2(2):294–313, 1992. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for the clear rebuttal, most of my questions have been cleared up; I especially like the extra intuition on the inverse transposition method and think it would be good to include in the paper. Regarding the mixing of the simpler shuffling styles, I still don't fully agree with slow mixing times being the direct reason for preferring riffle shuffles. If you have a slow-mixing forward process you can always increase $T$ until your process has fully mixed. I think the reason to prefer riffle in your framework is precisely because you have to compute the full VLB for every iteration. 
This is not standard in the diffusion framework, hence it is strange to use slow mixing as a disadvantage of these corruption styles. I don't think this is that important; it is more a phrasing issue. I would have phrased the narrative more along the lines of: shuffling corruptions require us to evaluate the full VLB because quantities that are usually analytic are no longer analytic -> $T$ cannot be too big -> we need fast-mixing forward processes. Some discussion of why you don't use the subset-of-the-VLB idea might also be good to include, since the need to evaluate the full VLB is somewhat limiting (it limits you to fast-mixing forward processes). I have raised my score to 7. --- Reply to Comment 1.1.1: Title: Re Response Comment: Thanks for your appreciation and valuable feedback! We agree with your logic and will rephrase our narrative accordingly. Additionally, a fast-mixing forward process is preferred from the computational perspective since the corresponding reverse process permits fast sampling. More discussion on this point will be included in a later version.
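The inverse-transposition (IT) parameterization described in A4 above can be sanity-checked numerically. The sketch below is a hypothetical Python helper (the name `p_it` and the argument layout are ours, not the authors' code): it implements the stated formula and lets one verify that the identity plus all transpositions receive total probability one.

```python
import math

def p_it(tau, s, i=None, j=None):
    """Probability mass of the IT distribution from the rebuttal above.

    tau  : scalar logit; sigmoid(tau) is P(not choosing the identity)
    s    : list of n per-index logits
    i, j : indices of the transposition (i j); i = j = None means Id.
    """
    sig = 1.0 / (1.0 + math.exp(-tau))          # sigmoid(tau)
    if i is None and j is None:                  # sigma = Id
        return 1.0 - sig
    z = sum(math.exp(sk) for sk in s)            # softmax normalizer
    pi = math.exp(s[i]) / z                      # P(pick i first)
    pj = math.exp(s[j]) / z                      # P(pick j first)
    # Conditional second picks: renormalize without the first index.
    pj_given_i = math.exp(s[j]) / (z - math.exp(s[i]))
    pi_given_j = math.exp(s[i]) / (z - math.exp(s[j]))
    # Unordered pair {i, j}: i-then-j plus j-then-i.
    return sig * (pi * pj_given_i + pj * pi_given_j)
```

Summing `p_it` over the identity and all unordered pairs returns 1 (up to floating-point error), matching the construction in A4.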
Summary: This paper proposes a discrete diffusion model to learn distributions over the finite symmetric group $S_n$. The forward process is built off of random walks on finite groups (in this case, card shuffles), and the paper learns to reverse this diffusion process with standard discrete diffusion arguments. Strengths: Overall, I really like the paper's contributions and presentation. * The idea is a neat application of the discrete diffusion ideas to an important area. In particular, the structure of $S_n$ is sufficiently different from standard image/text datasets as to necessitate this paper. * The presentation is very good and the contributions are numerous. * For the experiments listed, the method seems to provide a very strong improvement over baseline methods. In particular, these other methods are based on fundamentally different technology, so this highlights that discrete diffusion can become a very promising direction here. Weaknesses: There are three primary weaknesses. These should all be addressable to some degree, and I'll take any response into consideration when recalibrating my final score. 1. The model proposes to directly learn the reverse transition densities $p_\theta(X_{t-1} | X_t)$. The issue with doing this for standard diffusion models is that this seems to hurt model training since it increases the variance of training (as, in particular, one must sample the two $X_{t-1}, X_t$ for training instead of just one $X_t$). As such, most works use the (ultimately equivalent) mean/score-parameterizations [1, 2]. I would want to hear a bit more about if this would be applicable in the $S_n$ case (and training with this parameterization might improve the model) or if this is not possible. 2. (Related to the above). Since most modern discrete diffusion methods are formulated in continuous time, I think the paper would benefit greatly from a discussion about potentially extending the current methods to this realm. 
In particular, works like [3, 4, 5] have established a working theory for discrete diffusion in continuous time, so it would be beneficial to discuss how the proposed framework might fit into the established theory. 3. The experiments, while showing good results, do not show that the method is particularly scalable, which seems to be a fundamental problem in prior work that was explicitly mentioned in this paper. In particular, it seems that the maximum value of $n$ in $S_n$ is 100. While some discussion is made here that talks about transformer layers, I think large values of $n$ aren't that big of an issue in transformers due to systems like Flash Attention. So, it should be made more clear if this is a fundamental problem with the existing method, or a larger scale example (even toy) should be presented. [1] https://arxiv.org/abs/2006.11239 [2] https://arxiv.org/abs/2011.13456 [3] https://arxiv.org/abs/2205.14987 [4] https://arxiv.org/abs/2211.16750 [5] https://arxiv.org/abs/2310.16834 Technical Quality: 3 Clarity: 4 Questions for Authors: N/A beyond addressing the weaknesses above. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful and constructive comments. We appreciate your positive feedback and address the questions below. > **Q1:** The model proposes to directly learn the reverse transition densities $p_{\theta}(X_{t-1}|X_t)$. The issue with doing this for standard diffusion models is that this seems to hurt model training since it increases the variance of training (as, in particular, one must sample the two $X_{t-1}$, $X_t$ for training instead of just one $X_t$). As such, most works use the (ultimately equivalent) mean/score-parameterizations [1, 2]. I would want to hear a bit more about if this would be applicable in the $S_n$ case (and training with this parameterization might improve the model) or if this is not possible. > **A1:** We agree that learning $p_{\theta}(X_{t-1}|X_t)$ directly increases the variance of training. However, the reason previous works can use mean/score-parameterizations is that they use Gaussian kernels on continuous data, which admit analytical forms for $q(X_{t-1}|X_t,X_0)$ and $q(X_t|X_0)$. Such parameterizations are not applicable to $S_n$ since the transitions are not Gaussian. In particular, the posterior $q(X_{t-1}|X_t,X_0)$ is unavailable for most shuffling methods, so we cannot rewrite the ELBO in a lower variance form like in standard diffusion models. It is worth noting that even though $q(X_{t-1}|X_t,X_0)$ is available for riffle shuffles, the common parameterization presented in diffusion models is still not helpful. Please refer to the General Replies for a detailed discussion. > **Q2:** In particular, works like [3, 4, 5] have established a working theory for discrete diffusion in continuous time, so it would be beneficial to discuss how the proposed framework might fit into the established theory. > **A2:** Thanks for bringing up this point and mentioning the interesting work. 
Since the shuffling methods are inherently discrete-time Markov chains, it seems non-trivial to convert them into continuous-time Markov processes. In other words, the commonly used linear ODE parameterization in [3, 4] does not match the shuffling dynamics considered in our work. On the other hand, the concrete score and the score entropy idea in [5] seem to be compatible, i.e., we could possibly change the parameterization to develop the discrete-time counterpart in our context. We will explore this topic more and add the discussion in a later version. > **Q3:** The experiments, while showing good results, do not show that the method is particularly scalable… While some discussion is made here that talks about transformer layers, I think large values of $n$ aren't that big of an issue in transformers due to systems like Flash Attention. So, it should be made more clear if this is a fundamental problem with the existing method, or a larger scale example (even toy) should be presented. > **A3:** We agree that techniques like Flash Attention could improve our model’s scalability. A large value of $n$ is indeed not a big issue since our backbone network is also a Transformer. During submission, since some metrics (e.g., accuracy) drop significantly as $n$ increases and other competitive methods do not scale up well, we only reported results with $n\le100$ for a better comparison. During the rebuttal, we further investigated our model on the task of sorting 4-digit MNIST numbers with $n=200$ and show the results below.

| Sequence Length | Kendall-Tau | Accuracy (%) | Correct (%) |
| --- | --- | --- | --- |
| 200 | 0.317 | 0 | 39.4 |

This result is already better than the performance of sorting-network-based models on the easier task with $n=100$ (Diffsort/Error-free Diffsort: Kendall-Tau=0.166/0.140, Correct (%)=23.2/20.1). We will include more large-scale experiments in a later version. 
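For readers unfamiliar with the Kendall-Tau metric reported in the table above, here is a small sketch of the standard rank-correlation computation (a hypothetical helper of ours, not the authors' evaluation code): +1 means the two orderings agree on every pair, -1 means they disagree on every pair.

```python
def kendall_tau(perm_a, perm_b):
    """Kendall-Tau rank correlation between two rankings of the same items."""
    n = len(perm_a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # Do the two rankings order items i and j the same way?
            sign = (perm_a[i] - perm_a[j]) * (perm_b[i] - perm_b[j])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    # Normalize by the total number of pairs, n(n-1)/2.
    return (concordant - discordant) / (n * (n - 1) / 2)
```

For example, `kendall_tau([0, 1, 2, 3], [3, 2, 1, 0])` returns -1.0, since the second ranking reverses every pair of the first.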
--- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for answering my questions. I am raising my score, contingent on the authors working the discussion they proposed above into the paper. --- Reply to Comment 1.1.1: Title: Re Response Comment: Thank you for your appreciation and valuable feedback! We will include the discussion in a later version.
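Both rebuttal threads above center on the riffle shuffle as the forward corruption process. For concreteness, here is a minimal sketch of one Gilbert–Shannon–Reeds (GSR) riffle shuffle step, the classical model analyzed by Bayer and Diaconis; the function name and structure are our assumptions, not the authors' implementation.

```python
import random

def riffle_shuffle(deck, rng=random):
    """One GSR riffle shuffle: cut the deck at a Binomial(n, 1/2)
    position, then interleave, dropping the next card from either half
    with probability proportional to the cards remaining in that half.
    """
    n = len(deck)
    cut = sum(rng.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2)
    left, right = deck[:cut], deck[cut:]
    out, a, b = [], 0, 0
    while a < len(left) or b < len(right):
        rem_l, rem_r = len(left) - a, len(right) - b
        if rng.random() < rem_l / (rem_l + rem_r):
            out.append(left[a]); a += 1
        else:
            out.append(right[b]); b += 1
    return out
```

By construction the output is always a permutation of the input in which each half keeps its internal order, which is why a short sequence of such steps mixes quickly toward the uniform distribution on $S_n$.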
Summary: This paper introduces SymmetricDiffusers, a new approach to learning complex distributions over permutations. It works by breaking the problem down into simpler steps: learning how to reverse a shuffling process using deep neural networks. The authors identify a particularly effective shuffle for this purpose (the riffle shuffle) and provide guidance on choosing the right length for the process based on mathematical properties. Additionally, they propose a more powerful alternative to a common distribution (the generalized Plackett-Luce distribution) and a theoretically sound strategy for improving efficiency (the denoising schedule). Experiments show that SymmetricDiffusers performs extremely well on various tasks, including sorting images, solving puzzles, and optimizing routes. Strengths: The proposed method is in general interesting and seems effective in the examples studied in this paper. Also, the problem studied is interesting. Weaknesses: My main concern is that the proposed method is not state-of-the-art. A clear alternative is to learn invariant features and equivariant group actions using existing methods like the one proposed in Robin Winter et al., "Unsupervised Learning of Group Invariant and Equivariant Representations," NeurIPS 2022. In that paper, the authors have studied the symmetric group, and my understanding is that disentangling invariant features and equivariant group actions enables the design of flexible diffusion models, since you only need to perform diffusion modeling in the invariant latent space. I want to see a comparison with the approach proposed in Robin Winter et al.'s paper. My understanding is that Robin Winter's approach enables building state-of-the-art diffusion models for molecular generation. Technical Quality: 1 Clarity: 3 Questions for Authors: My main concern is that the proposed method is not state-of-the-art. 
A clear approach would be to learn invariant features and equivariant group actions using the method of Robin Winter et al., "Unsupervised Learning of Group Invariant and Equivariant Representations," NeurIPS 2022, and I want to see a comparison with that approach (see Weaknesses for details). Confidence: 5 Soundness: 1 Presentation: 3 Contribution: 1 Limitations: Same concern as under Weaknesses: the proposed method is not compared against the invariant/equivariant representation approach of Winter et al. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1:** In Robin Winter et al.'s paper, the authors have studied the symmetric group and my understanding is disentangling invariant features and equivariant groups enables the design of flexible diffusion models since you only need to perform diffusion modeling in the invariant latent space. > **A1:** Thanks for introducing Winter’s interesting work [1]. Specifically, [1] employs a graph-invariant encoder $\eta$, ensuring that regardless of the group action $\rho_X$ applied from group $G$ on a discrete state $x \in X$, the encoder's output remains constant ($\eta(\rho_X(g) x) = \eta(x), \forall x \in X, \forall g \in G$). This invariant characteristic is particularly beneficial for molecule generation [2], where the goal is to learn a probability distribution over molecules that is $\mathrm{SE}(3)$-invariant. However, we’d like to clarify that **performing diffusion modeling in the invariant latent space cannot solve the problem in our paper**. In particular, our goal is to learn a distribution over a finite symmetric group $S_n$, where it is critical that representations of different elements within the group are distinct. For instance, since distinct permutations result in different images in Jigsaw Puzzles, an invariant encoder cannot differentiate between incorrect and correct permutations based on the latent representations. Therefore, a group-invariant encoder does not solve our problem. On the other hand, one could use the group-equivariant encoder $\psi$ proposed in [1], which predicts group actions in one shot instead of performing diffusion. However, the resulting model would be essentially similar to the one in Gumbel-Sinkhorn [3], which has been shown to be less effective than ours in several tasks. Therefore, the approach in [1] does not align with the objectives of our study. We will discuss this work in a future version. [1] Winter, Robin, et al. 
"Unsupervised learning of group invariant and equivariant representations." *Advances in Neural Information Processing Systems* 35 (2022): 31942-31956. [2] Xu, Minkai, et al. "Geometric latent diffusion models for 3D molecule generation." *International Conference on Machine Learning*. PMLR, 2023. [3] Mena, Gonzalo, et al. "Learning latent permutations with Gumbel-Sinkhorn networks." *arXiv preprint arXiv:1802.08665* (2018). > **Q2:** My main concern is the proposed method is not the state-of-the-art… I want to see the comparison of the approach proposed in Robin Winter et al.'s paper. My understanding is that Robin Winter's approach enables building state-of-the-art diffusion models for molecular generation. > **A2:** To the best of our knowledge, our method achieves state-of-the-art performance in the tasks of sorting 4-digit MNIST images and jigsaw puzzles. It is important to note that our focus is not on developing diffusion models for molecular generation. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: I thank the authors for their response. However, I respectfully disagree with the authors' statement that the goal of Winter et al.'s work is to learn a probability distribution over molecules that is SE(3)-invariant. Winter et al.'s work is quite generic, and **they have discussed the symmetric group**. The experiments conducted by Winter et al. are far beyond SE(3)-equivariance. --- Reply to Comment 1.1.1: Title: Reply to the response Comment: As we have explained, although Winter et al.'s work [1] has discussed the symmetric group, **1) it focuses on representation learning instead of building probability distributions over symmetric groups,** and **2) it cannot directly solve our problem**. We will elaborate on these two points below. 1) Their experiment for the symmetric group aims to learn representations to reconstruct a set of digits represented by a sequence of $D$-dimensional digits with length $N$. 
We aim to learn distributions over symmetric groups to solve jigsaw puzzles, sorting, and traveling salesman problems, which require assigning different probabilities to different group elements. 2) As aforementioned, their group-invariant encoder $\eta$ produces the same representation for different group elements. Thus, it cannot be used to construct a distribution that assigns different probabilities to different group elements. Furthermore, their group-equivariant encoder $\psi$ predicts group actions in one shot instead of performing diffusion. This encoder alone is essentially the same as the one in Gumbel-Sinkhorn [3], which is less effective than ours in several tasks. In summary, their experiments completely differ from what we have done, and their work cannot solve our problem directly. We will discuss the connection in a future version. PS: Our original statement is that *the goal of molecule generation [2] is to learn a probability distribution over molecules that is SE(3)-invariant*. We did not comment on the goal of Winter et al.'s work [1]. --- Rebuttal 2: Title: Reviewer UKH2 seems incorrect Comment: I noticed this thread was developing, and I feel obligated to say something. Reviewer UKH2's comments seem to highly mischaracterize the contributions of Winter et al. to the point of being largely unfair and invalid. In particular, 1. [1] does not build a diffusion model anywhere in the paper, counter to claims made in the original comment. In fact, [1] is not even a probabilistic model at all. The work is effectively an equivariant AE (one that operates by mapping to $X/G \times G$ instead of $X$ in the mathematical framework). Note that, as a strict autoencoder, it is not even probabilistically meaningful, which means that we aren't learning a distribution on $X$ anyway. 2. [1] does not seem to be SOTA. In particular, the work is **qualitative** and shows the performance of the embeddings in, for example, visualizing data. 
The work is not **quantitative** and does not present any major numerical results. 3. [1] is primarily focused on building group **equivariant** models. In particular, I emphasize that, instead of modeling a distribution over the group $G$, one would model a distribution over a space $X$ that $G$ acts on (e.g. $X = \mathbb{R}^{n \times 3}$ and $G = SO(3) \times S_n$ would be a typical setup for molecules) **if** [1] were a proper VAE instead of an AE. This is **extremely** different from the proposed work, which models a distribution on $G$. 4. [2] is completely different from [1], is more relevant to the proposed work, and, even then, would not be applicable here. [2] learns a proper probabilistic model (i.e. a VAE + diffusion), which is completely different from the goal of [1]. [2] would not be applicable here either since we would be looking to model a distribution over $X$ that is equivariant with respect to $G$. If $X = S_n$, then there is no such $G$ besides the trivial group which would give the required flexibility. Such a framework would then have to continuously embed $|S_n| = n!$ elements into the latent space and decode there, which is computationally infeasible since even the most performant VQ-VAEs (i.e. discretely quantized representation VAEs, trained through some variant of the straight-through estimator) can have a codebook size of 1024. [1] Winter, Robin, et al. "Unsupervised learning of group invariant and equivariant representations." Advances in Neural Information Processing Systems 35 (2022): 31942-31956. [2] Xu, Minkai, et al. "Geometric latent diffusion models for 3D molecule generation." International Conference on Machine Learning. PMLR, 2023. --- Rebuttal Comment 2.1: Title: Thanks for the support from Reviewer 1BDf Comment: We thank Reviewer 1BDf for further clarifying the contributions of Winter et al. [1] and for pointing out the unfairness of Reviewer UKH2's comments. 
The listed points strongly support our explanation that (1) the method in [1] cannot solve the problem we address, and (2) the experiments in [1] are fundamentally different from ours. Therefore, it is unjust for Reviewer UKH2 to criticize our work for not being state-of-the-art compared to [1] and to request further comparison. [1] Winter, Robin, et al. "Unsupervised learning of group invariant and equivariant representations." Advances in Neural Information Processing Systems 35 (2022): 31942-31956. --- Rebuttal Comment 2.2: Title: I respectfully disagree Comment: The key idea of [2] - latent diffusion models for molecule generation - builds on the idea in [1], as pointed out in [2]. It seems [1] is not directly related to this paper, but the approaches proposed in [1] are quite generic. You can perform symmetric diffusion similarly to GeoLDM in [2] by parameterizing the encoder and denoising diffusion using an S(n)-equivariant neural network. This approach is much more viable compared to this paper in terms of performance, efficiency, and flexibility. --- Rebuttal 3: Title: Stepping in again. Comment: This is, unfortunately, the second time in this thread that I feel the need to step in. To reviewer UKH2: * **Please clarify your prior position and argument.** Both the authors and I have stated explicitly that the prior work that you reference does not work in the problem setting of the submitted paper. In particular, the prior work you reference can enable one to learn an **$\mathbf{S_n}$-equivariant probabilistic model** while the submitted work learns a **probabilistic model on $\mathbf{S_n}$**, which are two very different things and do not overlap in any meaningful way. For example, this means the submitted paper is *not* actually building a "symmetric diffusion". For clarity's sake, **I am asking you to directly address which part of this assertion you think is incorrect** (e.g. 
if the statement does not accurately portray your references, the submitted work, or the differences/similarities between the two bolded ideas). * **Please make more concrete points.** In particular, please do not reference course projects or intuitions about methods or anything else like this that we (authors, reviewers, ACs) are unable to directly access. For example, if UKH2 has played with the provided code in this paper and has tried a baseline method that they have mentioned (which could explain the motivation behind your comments), it would better support your point if you could provide us all with the code through an anonymous GitHub link or something similar. * **Please review in good faith.** Please do not accuse the authors (or me) of not reading your references. Please do not lower your score to spite the authors when they disagree with you. Please do not insult the submitted work in a dismissive way (e.g. "weak" and "not worth publishing" vs saying something like "does not adequately compare against relevant baselines", which also happens to be a more precise characterization of your critique). --- Rebuttal 4: Title: Your (reviewer 1BDf) position and scientific aspects get dubious Comment: I think my point on how to solve the problem by parameterizing the encoder and denoising process is clear, and the implementation is also clear: replacing the E(n)-equivariant neural networks in GeoLDM with the S(n)-equivariant neural networks used by Winter et al. The implementation is not challenging at all, and it is not my responsibility to provide code for this. As a reviewer, my role is to properly assess the paper based on my knowledge, though I cannot guarantee it is completely adequate. I lowered the score because I think the paper has major flaws after discussing it with the authors. 
Let us recall how you violated good faith: I first raised my comments on another solution to the problem, and your first comment in response was insulting and lacking in scientific merit. I then made my point on how to solve the problem clear and detailed, only to see the insulting language in your latest comment. I think your position is dubious and lacks good faith. My points are more than clear. **Unless the authors show me reasonable evidence that the proposed approach is better than using latent diffusion, I insist on rejecting this paper.** I am not going to respond to any of your further unscientific comments. --- Rebuttal Comment 4.1: Comment: Dear Reviewers and Authors, Thank you all for your engagement in the review process for this submission. I appreciate the time and effort that has gone into providing detailed feedback and responses. However, I am concerned that the discussion has become unproductive and moved away from a constructive scientific dialogue. At this point, I believe it would be best to refocus our attention on the core scientific merits of the paper. To that end: 1. I kindly ask all parties to refrain from personal accusations or comments about others' motivations or competence. Let's maintain professionalism and focus on the content of the work. 2. Reviewer UKH2: I appreciate your perspective on potential alternative approaches. However, for the purposes of this review, we need to evaluate the paper based on its own merits and comparisons to existing published work, rather than hypothetical unpublished methods. If you have specific concerns about the paper's methodology or results, please articulate them clearly in relation to the content of the submission. 3. Authors: Thank you for your detailed responses. Moving forward, please continue to address scientific points raised by the reviewers to the best of your ability. 4. Reviewer 1BDf: I appreciate your efforts to clarify points of contention. 
Please continue to focus on the scientific aspects of the submission in any further comments. 5. To all: If there are any remaining substantive scientific points that need clarification regarding the submission, please state them concisely. Otherwise, I will proceed with making a recommendation based on the reviews and discussion thus far. Let's please maintain a collegial and constructive tone as we conclude this review process. I will be carefully considering all feedback provided as I formulate my final recommendation. Thank you for your cooperation and dedication to the peer review process. Best regards, Area Chair
Rebuttal 1: Rebuttal: ## General Replies We thank the reviewers for their insightful and constructive comments. We have addressed all the questions in the individual responses. Here, we'd like to highlight a common question from the reviewers: **Q: Why can’t we use $q(X_{t-1}|X_t,X_0)$ and the KL divergence form of the variational bound? (Q1 for Reviewer **1BDf**; Q2 for Reviewer **fFZa**)** **A:** In our work, we optimize the following negative ELBO: $$ \mathbb{E}\_{p_{\text{data}}(X_0, \mathcal{X})q(X_{1:T} \vert X_0, \mathcal{X})} \left[-\log p(X_T \vert \mathcal{X}) - \sum_{t=1}^{T}\log\frac{p_{\theta}(X_{t-1}|X_t)}{q(X_t|X_{t-1})}\right] \tag{1} $$ Many diffusion models will write the negative ELBO in the following equivalent form of KL divergences to reduce the variance [2, 3]: $$ \mathbb{E}\_{p_{\text{data}}(X_0,\mathcal{X})q(X_{1:T}|X_0,\mathcal{X})}\left[D_{\mathrm{KL}}(q(X_T|X_0)\parallel p(X_T|\mathcal{X}))+\sum_{t>1}D_{\mathrm{KL}}(q(X_{t-1}|X_t,X_0)\parallel p_{\theta}(X_{t-1}|X_t))-\log p_{\theta}(X_0|X_1)\right] \tag{2} $$ However, we cannot use this objective for $S_n$ in most cases. In particular, since $$ q(X_{t-1}|X_t,X_0)=\frac{q(X_{t}|X_{t-1})q(X_{t-1}|X_0)}{q(X_t|X_0)}, $$ we can derive the analytical form of $q(X_{t-1}|X_t,X_0)$ if we know the form of $q(X_t|X_0)$. **However, $q(X_t|X_0)$ is unavailable for most shuffling methods used in the forward process except for riffle shuffles.** For riffle shuffles, $q(X_t|X_0)$ is actually available and permits efficient sampling [1]. However, $D_{\mathrm{KL}}(q(X_{t-1}|X_t,X_0)\parallel p_{\theta}(X_{t-1}|X_t))$ does not have an analytical form, unlike in common diffusion models. As a result, we cannot use the mean/score parameterization [2,4] commonly employed in the continuous setting. 
One can rewrite the term as follows and resort to the Monte Carlo (MC) estimation, $$ \mathbb{E}\_{q(X_t|X_0)}\Big[ D_{\mathrm{KL}}(q(X_{t-1}|X_t,X_0)\parallel p_{\theta}(X_{t-1}|X_t)) \Big] = \mathbb{E}\_{q(X_t|X_0)} \mathbb{E}\_{q(X_{t-1}|X_0)} \left[\frac{q(X_t|X_{t-1})}{q(X_t|X_0)}\cdot\log\frac{q(X_{t-1}|X_t,X_0)}{p_{\theta}(X_{t-1}|X_t)}\right]. $$ Note that $X_t \sim q(X_t|X_0)$ and $X_{t-1}\sim q(X_{t-1}|X_0)$ are drawn *independently*. However, there is a high chance that $q(X_t|X_{t-1})=0$ for the $X_t$ and $X_{t-1}$ that are sampled. Consequently, if we only draw a few MC samples, the resulting estimator will likely be zero with zero-valued gradients, impeding the optimization of the training objective. Our preliminary experiments also verified that the above MC version of the objective leads to slightly poorer empirical performance. | Training Loss (w. Forward Riffle Shuffle) | Sequence Length | Kendall-Tau | Accuracy (%) | Correct (%) | | --- | --- | --- | --- | --- | | Eq. (1) (loss in our paper) | 15 | 0.932 | 82.6 | 94.5 | | Eq. (2) | 15 | 0.926 | 80.7 | 94.2 | [1] Dave Bayer and Persi Diaconis. Trailing the dovetail shuffle to its lair. *The Annals of Applied Probability*, 2(2):294 – 313, 1992. [2] Ho, Jonathan, Ajay Jain, and Pieter Abbeel. "Denoising diffusion probabilistic models." *Advances in neural information processing systems* 33 (2020): 6840-6851. [3] Austin, Jacob, et al. "Structured denoising diffusion models in discrete state-spaces." *Advances in Neural Information Processing Systems* 34 (2021): 17981-17993. [4] Song, Yang, et al. "Score-Based Generative Modeling through Stochastic Differential Equations." *International Conference on Learning Representations*.
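The zero-weight failure mode described in the rebuttal above is easy to demonstrate empirically. The following is a small illustrative simulation (ours, not the authors'; the random-transposition shuffle, deck size, and step counts are arbitrary choices): it samples $X_t \sim q(X_t|X_0)$ and $X_{t-1} \sim q(X_{t-1}|X_0)$ independently, as in the MC estimator, and counts how often the importance weight $q(X_t|X_{t-1})$ is nonzero.

```python
import random

def random_transposition(perm, rng):
    """Apply one uniformly random transposition (a single forward shuffle step)."""
    perm = list(perm)
    i, j = rng.sample(range(len(perm)), 2)
    perm[i], perm[j] = perm[j], perm[i]
    return tuple(perm)

def shuffle_t_steps(x0, t, rng):
    x = x0
    for _ in range(t):
        x = random_transposition(x, rng)
    return x

def one_step_reachable(a, b):
    """q(b | a) > 0 for the transposition shuffle iff a and b differ in exactly 2 positions."""
    return sum(ai != bi for ai, bi in zip(a, b)) == 2

rng = random.Random(0)
n, t, trials = 8, 3, 2000
x0 = tuple(range(n))

# Draw X_t ~ q(.|X_0) and X_{t-1} ~ q(.|X_0) independently and count
# how often the importance weight q(X_t | X_{t-1}) is nonzero.
hits = 0
for _ in range(trials):
    xt = shuffle_t_steps(x0, t, rng)
    xtm1 = shuffle_t_steps(x0, t - 1, rng)
    hits += one_step_reachable(xtm1, xt)

print(f"fraction of pairs with nonzero weight: {hits / trials:.3f}")
```

In such runs the nonzero-weight fraction is small, so an MC estimate of the KL term based on a few samples is frequently exactly zero, matching the rebuttal's argument about vanishing gradients.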
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Overfitting Behaviour of Gaussian Kernel Ridgeless Regression: Varying Bandwidth or Dimensionality
Accept (poster)
Summary: This paper concerns the overfitting behaviour of Gaussian kernel ridge regression (KRR) with varying bandwidth or dimensionality. The contribution is two-fold: 1. In fixed-dimension, the ridgeless solution of Gaussian KRR is not consistent with any varying or tuned bandwidth. 2. In high dimension with input dimension $d=2^{2^l}$, sample size $m=2^{2^{2^l}}$ for any arbitrary integer $l$, the overfitting of Gaussian KRR is benign. All the results are under the Gaussian universality Ansatz and the (non-rigorous) risk predictions in terms of the kernel eigenstructure introduced in [Simon2023]. Reference: - James B. Simon, Madeline Dickens, Dhruva Karkada, and Michael R. DeWeese. The eigenlearning framework: A conservation law perspective on kernel regression and wide neural networks. arXiv preprint arXiv:2110.03922, 2021. Strengths: The two contributions are novel. Weaknesses: However, I have serious concerns about the assumptions made in this paper. Firstly, the eigenframework under the Gaussian Universality Ansatz (GUA) is non-rigorous and actually fails to explain the catastrophic overfitting of NTK in fixed dimensions, as reported in \cite{Barzilai2023} and \cite{Cheng2024}. I am uncertain whether benign overfitting would still hold in high-dimensional settings. This paper does not offer experimental validation either, so I cannot ensure the correctness of the claim for Gaussian kernels (not its proxy under GUA). Secondly, the catastrophic overfitting of Gaussian kernels with fixed bandwidth is well-known. It is not difficult to imagine that Gaussian KRR with varying bandwidth remains inconsistent. While the first contribution of this paper is novel, I question whether it is significant enough for paper acceptance, especially given my concerns regarding the second contribution. Reference: - Barzilai, Daniel, and Ohad Shamir. "Generalization in kernel regression under realistic assumptions." arXiv preprint arXiv:2312.15995 (2023). - Cheng, Tin Sum, et al. 
"Characterizing overfitting in kernel ridgeless regression through the eigenspectrum." arXiv preprint arXiv:2402.01297 (2024). Technical Quality: 2 Clarity: 3 Questions for Authors: The most important question would be if the authors could provide some experimental validations on these two claims. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: All assumptions are stated in the claims. However, as mentioned before, GUA and Eigenframework assumptions are too unrealistic and might provide wrong results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Below we address the main comments: 1. Regarding the wrong predictions of the Gaussian Universality Ansatz (GUA): There is ample empirical evidence that the eigenframework holds well for the Gaussian kernel. For example, Figure 1 in [38] shows the predicted vs. true test risk for Gaussian KRR (with data from MNIST) and Figure 5 in [7] shows the true vs. predicted risk for Gaussian KRR with data uniform on the sphere (which is the setup we consider here). In both of these plots, the prediction matches closely with the true test risk. Furthermore, there are many works that rely on the eigenframework, such as [1, 7, 12, 25, 47, 54, 63]. Therefore, this limitation holds for the entire line of work and not just for the current paper. We agree with the reviewer that understanding the limitations of this framework is an important future research direction. We will update the paper to highlight this as a limitation. 2. Experimental results confirming the validity of our claims: In the attached PDF, we plotted the dependence of the test error on sample size for the Gaussian kernel ridgeless (interpolating) predictor. In Figure a, we consider the first case of Theorem 1, namely when the bandwidth $\tau_m = o(m^{-\frac{1}{d-1}})$. It shows that the test error tends to something equal to or larger than the test risk of the null predictor. In Figure b, we consider the second case, i.e. $\tau_m = \omega(m^{-\frac{1}{d-1}})$, and the plot shows that the test error increases with sample size, which aligns with the claim that it diverges as the sample size $m$ increases. In Figure c, we consider the case when $\tau_m = \Theta(m^{-\frac{1}{d-1}})$, and the plot shows that for some noise level (which is high enough), the predictor is worse than the null predictor, as predicted by our theorem. We will add these simulations to the final version of the paper. 
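For readers who want to reproduce this kind of plot, a minimal sketch of the setup is below. It is our own illustrative code, not the authors' experiment: the target function, noise level, dimension, and the tiny jitter added for numerical stability are all assumptions; the bandwidth follows the critical scaling $\tau_m = \Theta(m^{-\frac{1}{d-1}})$ from Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_sample(m, d):
    """Uniform samples on the sphere S^{d-1}."""
    x = rng.normal(size=(m, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def gaussian_kernel(A, B, tau):
    """Gaussian kernel matrix with bandwidth tau."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / tau ** 2)

d, noise = 3, 0.5

def target(X):
    return X[:, 0]  # illustrative target function (our assumption)

for m in [50, 100, 200, 400]:
    tau = m ** (-1.0 / (d - 1))               # critical bandwidth scaling
    X_train, X_test = sphere_sample(m, d), sphere_sample(1000, d)
    y_train = target(X_train) + noise * rng.normal(size=m)
    K = gaussian_kernel(X_train, X_train, tau)
    # Ridgeless (interpolating) solution; tiny jitter only for numerical stability.
    alpha = np.linalg.solve(K + 1e-10 * np.eye(m), y_train)
    pred = gaussian_kernel(X_test, X_train, tau) @ alpha
    test_err = np.mean((pred - target(X_test)) ** 2)
    null_err = np.mean(target(X_test) ** 2)   # risk of the null (zero) predictor
    print(f"m={m:4d}  test risk={test_err:.3f}  null risk={null_err:.3f}")
```

Changing the exponent in the `tau` line toward faster or slower decay corresponds to the other two bandwidth regimes of Theorem 1.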
--- Rebuttal Comment 1.1: Comment: Thank you for your prompt response and the experimental results. We can now be confident that the Gaussian kernels behave as predicted by the theory. However, for a theoretical result, I would expect a more realistic assumption to explain the Gaussian kernel interpolation. As mentioned by other reviewers, the major technique was developed in earlier works, and the results presented in this paper alone do not justify acceptance. I believe one possible direction could be to drop the GUA assumption in your analysis. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their further feedback. We believe that the discussion about the significance of the theoretical results is important for all the reviewers to see, so we are providing a global reply to this discussion.
Summary: This paper discusses the generalization ability of Gaussian kernel interpolation, an interesting topic. However, the authors sidestep the most challenging part by assuming the 'Gaussian universality ansatz,' which significantly simplifies the problem. With this assumption, their work essentially consists of elementary computations. Furthermore, their argument heavily relies on the earlier work by Lijia Zhou, James B. Simon, Gal Vardi, and Nathan Srebro, titled 'An agnostic view on the cost of overfitting in (kernel) ridge regression,' an arXiv preprint with the identifier arXiv:2306.13185, published in 2023. Strengths: A detailed calculation of the eigenvalue structure of the Gaussian kernel with various bandwidths has been performed. This could be valuable for future studies. Weaknesses: No significant contribution was made to the kernel interpolation problem itself. The major workhorse was developed in earlier work by Lijia Zhou, James B. Simon, Gal Vardi, and Nathan Srebro, as detailed in their paper titled 'An agnostic view on the cost of overfitting in (kernel) ridge regression,' published as an arXiv preprint with the identifier arXiv:2306.13185 in 2023. Technical Quality: 2 Clarity: 2 Questions for Authors: Perhaps due to my lack of knowledge, could the author elaborate on their contributions beyond the detailed computation of the Gaussian kernel with varying bandwidths? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Below we address the main comments: 1. The goal of the paper is to study the kernel interpolation problem by understanding its behavior under common settings. It achieves this goal in two ways. First, we consider the common Gaussian kernel with varying bandwidth. In lines 82-86, we argue that this is an important setting since it achieves optimal rates of convergence for a large set of target functions (note that the optimal rates are achieved by varying the bandwidth). Indeed, using a varying bandwidth is a standard practice when using the Gaussian kernel. We are the first to consider whether the interpolating solution exhibits tempered or catastrophic behavior in this setting and how it compares to the null predictor. This result shows that even with varying bandwidth, the interpolating solution is almost always worse than the null predictor. Second, we consider increasing dimensions and sample size with more general kernels. We establish upper and lower bounds on the test risk for any scaling of the dimension and sample size, which indicates what type of overfitting we get. This result not only agrees with the known case of polynomial scaling of the dimension but also allows us to show the first case of sub-polynomially increasing dimension that achieves benign overfitting for the Gaussian kernel. Our results provide a more comprehensive analysis of overfitting in kernel interpolation. 2. The results are highly non-trivial at the technical level and do not simply use Zhou et al 2023 [63] (as the reviewer suggested). First, for fixed dimension, our technical contribution is deriving the dependence of the eigenvalues of the Gaussian kernel on the bandwidth (Theorem 8, line 1133) and hence understanding how the test risk depends on the bandwidth. 
Second, for increasing dimension, our technical contribution is deriving upper and lower bounds for the test risk that work for any scaling of the dimension and sample size (Theorems 2 and 3, lines 189 and 203), by introducing the “upper and lower index” (Definition 1, line 182). --- Rebuttal 2: Comment: In kernel regression, there are two quantities that characterize the generalization ability of the kernel estimator (and for interpolation, there might be only one such quantity). These quantities are in turn determined by the eigenvalues of the kernel. When considering the Gaussian kernel, there is an explicit formula for the eigenvalues in terms of the bandwidth h. Thus, if h varies, the two quantities will vary as well. Since the author has further adopted the Gaussian ansatz, I assume the major contribution is characterizing how the eigenvalues and these two quantities vary with changes in h. Could you please correct me if I have overlooked any other key contributions? --- Rebuttal Comment 2.1: Comment: We thank the reviewer for additional comments. Indeed, for the interpolating solution of the Gaussian KRR, there is one parameter that can be varied, the bandwidth $\tau_m$. Note that the bandwidth can (and in practice often does) depend on sample size $m$. Our first technical contribution, which is understanding how the eigenvalues of the Gaussian kernel for the sphere depend on the bandwidth $\tau _ m$, holds without the Gaussian design ansatz assumption. The second technical contribution is understanding how the test risk of the interpolating solution of Gaussian KRR depends on the bandwidth and thus characterizing its overfitting behavior, under the Gaussian design ansatz. Note that both of these are highly nontrivial technical contributions. Our third technical contribution is for the case of increasing dimension, understanding how the test risk behaves for any scaling of the dimension and sample size. 
We achieve this by showing upper and lower bounds on the test risk of the interpolating solution of KRR. Our fourth technical contribution is using these bounds for increasing dimension to demonstrate that we can achieve benign overfitting even with sub-polynomially increasing dimension and Gaussian KRR, improving the currently known result for polynomially increasing dimension. This is also a nontrivial technical contribution.
Summary: This paper looks at the problem of kernel (ridgeless) regression with a Gaussian kernel and data on a sphere. The paper studies two cases: first, we fix the dimension $d$ and are allowed to send the number of data points $m \to \infty$ and the width of the kernel $\tau_m \to 0$; second, we are allowed to scale $d$. For both problems they study how a quantity known as the predicted risk scales. *It is important to note that the paper assumes that this is a good estimate of the true excess risk, which is not immediately evident and I think requires more justification (see weaknesses).* In the first case they show that if $\tau_m$ goes to zero too quickly, we have tempered overfitting for the *predicted risk* and that it has a worse risk than the null estimator. If it goes to 0 too slowly, then we have catastrophic overfitting for the *predicted risk*; and if it goes to zero at the appropriate rate, we have tempered overfitting again and may outperform the null estimator. The paper also presents some results about the case when $d$ scales as well. Strengths: The paper uses the predicted risk as an estimate of the true risk. The predicted risk is given by $$ \tilde{R}(\tilde{f}\_{\delta}) = \mathcal{E}\_\delta \left(\sum\_{i=1}^\infty (1-\mathcal{L}\_{i,\delta})^2 \beta_i^2 + \sigma^2 \right) $$ where $\beta_i$ are the coefficients of the target function in the eigenbasis of the kernel function, $\mathcal{E}, \mathcal{L}$ are coefficients based on eigenvalues, and $\sigma^2$ is the noise variance. *If we assume that this is a good estimate of the excess risk*, then to understand the risk, we have reduced the problem to understanding how $\mathcal{E}, \mathcal{L}$ **and** $\beta_i$ scale. **This paper studies how $\mathcal{E}_0$ scales by providing some bounds on the eigenvalues of the Gaussian kernel function. This I think is the primary contribution of the paper.** However, there are some issues, see weaknesses. 
Weaknesses: There are a few weaknesses. The current reject score is due to my concerns with the proofs. 1) **Theorem 1** I have some concerns with the results in Theorem 1. Theorem 1 states that unless we meet some very specific conditions, the null predictor (i.e. the zero function) outperforms kernel ridge regression. This seems quite surprising to me. Taking a closer look at the proof, we see that the proof is broken down into two lemmas: one proves the upper bounds, the other the lower bounds. However, Lemma 1 only deals with $\mathcal{E}_0$, and I think something has to be said about $\sum\_{i=1}^\infty (1-\mathcal{L}\_{i,\delta})^2 \beta_i^2$. If I understand correctly, when we change $m$, we change the width, which changes the kernel function. Since $\lambda_i$ and $\phi_i$ are dependent on $K$, the $\beta_i$, which are the coefficients of $f$ in this eigenbasis, are dependent on $m$. Hence $\sum\_{i=1}^\infty (1-\mathcal{L}\_{i,\delta})^2 \beta_i^2$ has a subtle dependence on $m$ which I think needs to be accounted for, because a priori it is not clear to me that this is bounded in $m$. That is, $\sum_i \beta_i^2(m) = S(m)$ could be unbounded in $m$. I did not go through the lower bound proofs as carefully, but I do not think the dependence of $\beta_i$ on $m$ is addressed. Please correct me if I am wrong. Hence I think either another assumption needs to be added (and if it is added, a discussion of its reasonableness would be appreciated) or that term needs to be explicitly dealt with. 2) Eigenlearning Framework and Predicted Risk In general, I would prefer results showing that the predicted risk is a good estimate of the true risk. The paper on lines 116-131 attempts to do this. However, the primary papers that have been cited [1,12,25] are all about **linear** regression and not **kernel** regression. Specifically, they all assume that $y_i = \beta^T x_i + \epsilon_i$ and that we are fitting a linear function. 
This is not quite the setting of this paper, as we have an infinite-dimensional feature space that we are working in. Hence I do not think those papers are sufficient theoretical justification. My other concern with this is that the discussion from lines 98-111 **does not assume that** $y = f(x) + \epsilon_i$, which I think is needed in [47]. The paper presents results for general distributions on $\mathcal{X}$ and $\mathcal{Y}$. I do not think the eigenlearning framework captures the following situation: $x_i = z_i + \epsilon_i$ and $y_i = f(z_i)$, but the paper here, as written, would claim this. See [A,B] for papers that address such a situation; [B] even has results on benign overfitting. [A] Sonthalia, Rishi and Raj Rao Nadakuditi. “Training Data Size Induced Double Descent For Denoising Feedforward Neural Networks and the Role of Training Noise.” Trans. Mach. Learn. Res. 2023 (2023): n. pag. [B] Kausik, C., Srivastava, K., & Sonthalia, R. (2023). Double Descent and Overfitting under Noisy Inputs and Distribution Shift for Linear Denoisers. TMLR 2024. 3) References I think the long list of references on lines 45-48 needs to be changed. The paper should either further subcategorize them or cite fewer papers. If the authors decide to further subcategorize them, I think they are missing some references [A,B,C,D]. Additionally, some papers are cited twice, see 25, 26, and 23, 24. [C] Xiao, L., Hu, H., Misiakiewicz, T., Lu, Y., & Pennington, J. (2022). Precise Learning Curves and Higher-Order Scalings for Dot-product Kernel Regression. Neural Information Processing Systems. [D] Hu, H., Lu, Y.M., & Misiakiewicz, T. (2024). Asymptotics of Random Feature Regression Beyond the Linear Scaling Regime. ArXiv, abs/2403.08160. Technical Quality: 2 Clarity: 2 Questions for Authors: Please see weaknesses Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The limitations are well discussed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. Below we address the main comments: 1. Regarding the claim about Theorem 1: $\beta_i$ in this case do not depend on $m$. Changing $m$ does change the bandwidth $\tau_m$, but the eigenfunctions remain the same despite the kernel changing - the eigenfunctions are the spherical harmonics. Therefore, $\beta_i$ does not depend on $m$, as it is $\beta_i = \mathbb E_{D_X} \left( f^* \phi_{i}\right)$ and $\phi_{i}$ is a fixed spherical harmonic in $d$ dimensions (i.e. $Y_{ks}$ for some $k$ and $s$; here $d$ is fixed). Therefore $\sum_i \beta_i^2 = S(m)$ is fixed and independent of $m$. Note however that $\mathcal{L}_{i,\delta}$ does depend on $m$ as it depends on the eigenvalues and this is addressed in both the upper and lower bounds (although indirectly for the lower bound). 2. Regarding the eigenlearning framework: In lines 126-129, we discuss a more recent work, [38], in which they provide theoretical justification for eigenframework in the case of kernel regression. Since there are quite a few theoretical works that rely on the eigenlearning framework (including ours), for example [1, 7, 12, 25, 47, 54, 63], and since there is vast empirical evidence to support it, we agree with the reviewer that obtaining further theoretical justification for it is an important topic for future research. The assumption on the target in 98-111: It’s true that as it’s written, it is not yet assumed that $y = f^*(x) +\xi$. However, as discussed in Zhou et al 2023 [63], the results of [47] can be extended to a more general setup. We will edit the paper to clarify this. 3. Regarding the references: Thanks. We will update this section of the paper to include these additional references. --- Rebuttal Comment 1.1: Comment: Could the authors expand on how we know the eigenfunctions are spherical harmonics? Even a reference to another work would be appreciated. Line 100 seems to cite Mercer's Theorem. 
These notes (https://www.stat.berkeley.edu/~bartlett/courses/2014spring-cs281bstat241b/lectures/20-notes.pdf) by Peter Bartlett seem like a reasonable reference to me. Here the order of implications is for a given kernel there exist eigenfunctions such that... so it looks like the eigenfunctions are dependent on the kernel. --- Reply to Comment 1.1.1: Title: Reply Comment: Yes, we can provide references and explain further. In [43], in section 2 and Theorem 2, it is explained that when we are on $S^{d-1}$, the eigenfunctions of the Gaussian kernel are spherical harmonics. This is shown in their proof of Theorem 2 in equation (24). Namely, using the Funk-Hecke formula (see page 30 of the reference below), one can show that for any inner product kernel on the sphere, the spherical harmonics are the eigenfunctions. The Funk-Hecke formula states that for any spherical harmonic of order $k$, $Y_k$, and any $x\in S^{d-1}$ the following equation holds $\int_{S^{d-1}} f(\langle x,t \rangle) Y_k(t) dS^{d-1}(t) = \lambda_k Y _ k(x)$, where $\lambda_k$ is a real number given by an integral that depends on $f$ (see [43] for the formula). Then, if our kernel $K: X\times X\to \mathbb R$ is an inner product kernel, there is a function $k: \mathbb R\to \mathbb R$ such that $K(x,t) = k(\langle x,t\rangle)$. Then applying the Funk-Hecke formula for $f=k$ would give the desired result. Note that on $S^{d-1}$, we have that $\exp\left(-\frac{\| x-t \|^2}{\sigma^2} \right) = \exp \left( -\frac{2}{\sigma^2}\right) \exp \left( \frac{2 \langle x,t \rangle }{\sigma^2} \right)$, which shows that the Gaussian kernel is an inner product kernel. For the proof of the Funk-Hecke formula, the reviewer can look at the reference given below. The fact that eigenfunctions of inner product kernels are spherical harmonics is frequently used in papers that discuss inner product kernels, such as [4] (Section G). Funk-Hecke formula reference: C. Muller. Analysis of Spherical Symmetries in Euclidean Spaces. 
Applied Mathematical Sciences 129, Springer, New York, 1997. [43] Minh Ha Quang and Yuan Yao. Mercer’s theorem, feature maps, and smoothing. Conference: Learning Theory, 19th Annual Conference on Learning Theory, COLT, 2006. [4] Daniel Barzilai and Ohad Shamir. Generalization in kernel regression under realistic assumptions. arXiv preprint arXiv:2312.15995v2, 2024.
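As a quick sanity check of the inner-product identity used in the reply above (this numeric verification is ours, with arbitrary dimension and bandwidth), one can confirm that on the unit sphere the Gaussian kernel depends on $x$ and $t$ only through $\langle x, t \rangle$, since $\|x-t\|^2 = 2 - 2\langle x, t\rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 5, 1.3  # illustrative dimension and bandwidth

def unit(v):
    """Project a vector onto the unit sphere."""
    return v / np.linalg.norm(v)

x, t = unit(rng.normal(size=d)), unit(rng.normal(size=d))

gaussian = np.exp(-np.linalg.norm(x - t) ** 2 / sigma ** 2)
# On the unit sphere ||x - t||^2 = 2 - 2<x, t>, so the Gaussian kernel
# factors as a constant times a function of the inner product alone:
inner_product_form = np.exp(-2 / sigma ** 2) * np.exp(2 * np.dot(x, t) / sigma ** 2)

print(abs(gaussian - inner_product_form))
```

The two expressions agree to machine precision, which is exactly the factorization that lets the Funk-Hecke formula apply to the Gaussian kernel on $S^{d-1}$.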
Summary: The paper investigates the overfitting behavior of Gaussian Kernel Ridge Regression (KRR) in high-dimensional settings, focusing on how the model performance is influenced by the choice of the bandwidth parameter and sample size. They aimed to provide a more comprehensive picture of overfitting with Gaussian KRR by studying the overfitting behavior with varying bandwidth or with arbitrarily varying dimension, including sub-polynomially. In particular, they showed that for fixed dimension, even with varying bandwidth, interpolation learning is never consistent and generally not better than the null predictor (either the test error tends to infinity, or it is finite but almost always not better than the null predictor). For increasing dimension, they gave an upper and lower bound on the test risk for any scaling of the dimension with sample size, which indicates in many cases whether the overfitting is catastrophic, tempered or benign. They showed the first example of sub-polynomially scaling dimension that achieves benign overfitting for the Gaussian kernel. Additionally, they showed that a class of dot-product kernels on the sphere is inconsistent when the dimension scales logarithmically with sample size. Strengths: The presentation of the paper looks really in good order, with adequate clarifications and a rich literature review. The motivation is clear to me. The proofs provided also look correct in general. They clearly provided an upper and lower bound on the test risk for any scaling of the dimension with sample size, which provides a clear and comprehensive understanding of the issue of overfitting with Gaussian KRR that is not restricted to particular regimes, while also discovering a new result for sub-polynomially varying dimension. I found the proof technique quite interesting and insightful. 
Although the contribution is not extremely outstanding or groundbreaking, the results reflect the effect of varying bandwidth or dimensionality in a very comprehensive way; they certainly facilitate the understanding of this issue and also provide very insightful theoretical guidance for picking these crucial parameters. Weaknesses: The first and clearest weakness is the lack of experiments. Since the results in the paper focus on the overfitting behavior with varying bandwidth or with arbitrarily varying dimension, including sub-polynomially, it would be very instructive to study the real effects predicted by the theory on simulated or real-world datasets. The results may also be sensitive to the tuning parameters. The paper highlights the complexity of balancing these parameters but does not provide robust, practical guidelines for optimal tuning. Different data distributions and kernel eigenstructures result in non-uniform impacts of the tuning parameters, complicating the tuning process further. It would be better if the paper considered more general data distributions in Assumption 2, or at least provided some insight into how the results may vary under different data distributions. Technical Quality: 3 Clarity: 4 Questions for Authors: Can you intuitively explain the theoretical basis for the catastrophic overfitting observed with certain bandwidth scalings and choices of kernels? What are the practical guidelines for choosing the bandwidth parameter in real-world applications, since there is no experiment? How does the proposed method perform across different types of data distributions besides Assumption 2? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. Below we address the main comments: 1. Regarding the experiments: There is ample empirical evidence about the eigenframework holding well for the Gaussian kernel. For example, Figure 1 in [38] shows the predicted vs. true test risk for Gaussian KRR (data is from MNIST), and Figure 5 in [7] shows the true vs. predicted risk for Gaussian KRR with data uniform on the sphere (which is the setup we consider here). In both of these plots, the prediction matches closely with the true test risk. Furthermore, in the attached PDF we also provide empirical evidence that the claim for the case of fixed dimension holds. We plot the dependence of the test error on sample size for the Gaussian kernel ridgeless (interpolating) predictor. In Figure a, we consider the first case of Theorem 1, namely when the bandwidth $\tau_m = o(m^{-\frac{1}{d-1}})$. It shows that the test error tends to a limit equal to or larger than the test risk of the null predictor. In Figure b, we consider the second case, i.e. $\tau_m = \omega(m^{-\frac{1}{d-1}})$, and the plot shows that the test error increases with sample size, which aligns with the claim that it diverges as the sample size $m$ increases. In Figure c, we consider the case when $\tau_m = \Theta(m^{-\frac{1}{d-1}})$, and the plot shows that for some noise level (which is high enough), the predictor is worse than the null predictor, as predicted by our theorem. We will add these simulations to the final version of the paper. 2. Intuitive explanation for the catastrophic behavior observed: As discussed in [24], the functions learned by the Gaussian kernel are too smooth, so they overfit the noise harmfully. On the other hand, the authors construct a “spiky-smooth” kernel that can exhibit benign overfitting in a fixed dimension. 3. Practical guidelines for optimal parameter tuning: Since the result for fixed dimension is negative, i.e.
we show that the overfitting is catastrophic or tempered and that the predictor is almost always worse than the null predictor, it’s not expected that the Gaussian ridgeless regression would be successful in practice. The guideline would be to use an optimally tuned ridge and to avoid interpolation when using Gaussian KRR. 4. Different data distributions: As briefly mentioned in lines 142-145, the same result extends to more general distributions, namely uniform on a manifold that is diffeomorphic to the sphere. So the result holds for more general data distributions as well. Furthermore, considering inner product kernels with respect to the uniform distribution on the sphere is very common in many theoretical works, such as [4,37,38,61], just to mention a few. We agree with the reviewer that extending the results for additional distributions is an interesting future direction. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification and the attached PDF. I have increased my scores.
Rebuttal 1: Rebuttal: We thank all of the reviewers for their thoughtful feedback. In the attached PDF, we provide empirical evidence that our predictions are correct. We focus on the finite-dimensional case because of computational constraints. In Figure a, we consider the first case of Theorem 1, namely when the bandwidth $\tau_m = o(m^{-\frac{1}{d-1}})$. It shows that the test error tends to a limit equal to or larger than the test risk of the null predictor. In Figure b, we consider the second case, i.e. $\tau_m = \omega(m^{-\frac{1}{d-1}})$, and the plot shows that the test error increases with sample size, which aligns with the claim that it diverges as the sample size $m$ increases. In Figure c, we consider the case when $\tau_m = \Theta(m^{-\frac{1}{d-1}})$, and the plot shows that for some noise level (which is high enough), the predictor is worse than the null predictor, as predicted by our theorem. We will add these simulations to the final version of the paper. Pdf: /pdf/5b5d18c3aedae45e58b3b09a233564d670d8ff9b.pdf
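The fixed-dimension simulation described in this rebuttal can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the choice of a zero target function (so labels are pure noise and the null predictor has risk `noise_std**2`), the numerical jitter, and the sample sizes are our own assumptions.

```python
import numpy as np

def sphere_sample(m, d, rng):
    """Draw m points uniformly on the unit sphere S^{d-1} in R^d."""
    z = rng.standard_normal((m, d))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def gaussian_kernel(X, Y, tau):
    """Gaussian kernel matrix with bandwidth tau."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * tau**2))

def ridgeless_test_risk(m, d, tau, noise_std, rng, m_test=400):
    """Test risk of the interpolating (ridgeless) Gaussian KRR predictor.

    The target is assumed to be f*(x) = 0, so the labels are pure noise and
    the null predictor has risk noise_std**2; compare against that baseline.
    """
    X = sphere_sample(m, d, rng)
    y = noise_std * rng.standard_normal(m)
    K = gaussian_kernel(X, X, tau)
    # Tiny jitter keeps the solve numerically stable; the predictor still
    # (approximately) interpolates the training labels.
    alpha = np.linalg.solve(K + 1e-10 * np.eye(m), y)
    X_test = sphere_sample(m_test, d, rng)
    preds = gaussian_kernel(X_test, X, tau) @ alpha
    return float(np.mean(preds**2))

def critical_bandwidth(m, d, c=1.0):
    """Bandwidth scaling tau_m ~ m^{-1/(d-1)} from Theorem 1's critical regime."""
    return c * m ** (-1.0 / (d - 1))
```

Sweeping `m` and plotting `ridgeless_test_risk` against the null-predictor baseline for the three bandwidth scalings $o$, $\omega$, and $\Theta$ of $m^{-\frac{1}{d-1}}$ would reproduce the qualitative behavior of Figures a, b, and c.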
NeurIPS_2024_submissions_huggingface
2024
Summary: The authors derive estimates of the population risk of the kernel (ridgeless) regression estimator for spherical data when either the kernel bandwidth or the dimension of the data depends on the number of training points $m$. To do so, authors rely on the eigenframework and prior work, which provides (under assumptions on the distributions of the random variable $\phi(X)$ where $\phi$ is the kernel feature map and $X$ is the input random variable) formulas for the population risk of the kernel ridge regression estimator as a function of the eigenvalues of the kernel integral operator. Using these formulas, authors then derive nonasymptotic estimates of these quantities in order to establish the limit value of the risk, as well as its dependency w.r.t. $m$ and $d$. They obtain that - for a fixed dimension, overfitting is always harmful (tempered or catastrophic) unless the bandwidth scales precisely as $m^{-\frac{1}{d-1}}$, in which case overfitting is shown to be tempered for large enough noise in the data (Theorem 1) - for a fixed bandwidth but varying dimension, they derive lower and upper bounds on the population risk. This allows them to - recover known results stating that when the dimension increases polynomially with sample size, overfitting is benign or tempered - show that when the dependency is logarithmic, overfitting is not benign (under additional assumptions verified for the Gaussian kernel) - show that when the dependency is sub-polynomial, e.g. $\exp(\sqrt{m})$ scaling, overfitting can be benign Strengths: The paper is clear, with a good effort spent on literature review and comparison with prior work. The main technical contributions seem to be bounds on the spectra of dot-product kernels on the sphere, which if novel (I am not an expert of this domain) could be of independent interest and are welcomed. Weaknesses: - This paper investigates the behavior of **ridgeless** (i.e.
no RKHS norm penalty) kernel estimators, something which should be made clearer. For example, kernel ridge regression is part of the paper title, whereas the paper does not study kernel ridge regression. - The eigenframework they use relies on a number of assumptions to be valid. Technical Quality: 3 Clarity: 3 Questions for Authors: In the analysis, the data is assumed to be on the sphere. Can the authors comment on how they expect the results to change for data distributed on the whole of $\mathbb R^d$? Typos: l 294 l 298 Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Mentioned in weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. Below we address the main comments: 1. Ridge vs Ridgeless: We will change the title to use “ridgeless” instead and reword other relevant parts of the paper. 2. Assumptions for the eigenframework: There is ample empirical evidence that the eigenframework holds well for the Gaussian kernel. For example, Figure 1 in [38] shows the predicted vs. true test risk for Gaussian KRR (using MNIST data) and Figure 5 in [7] shows the true vs. predicted risk for Gaussian KRR with data uniform on the sphere (which is the setup we consider here). In both of these plots, the prediction matches closely with the true test risk. 3. Answer to the reviewer's question: The Gaussian kernel will essentially zero out contributions of points that are far away, so as long as the bandwidth is small enough and the sample size is sufficiently large, the d-sphere will “look” the same as $\mathbb R^d$. So we expect that the result stays the same for other measures on $\mathbb R^d$, but we leave this question for further research. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you to the authors for their response. Regarding 2 - reading again that section of the paper, I think it would help guide the reader unfamiliar with this literature if the authors explained how the predicted risk expression is obtained. For instance, quoting [38], "[BCP20] presents two different approaches to obtain this analytical expression: a continuous approximation to the learning curves inspired by the Gaussian process literature [Sol01], and replica method with a saddle-point approximation". A sentence along these lines could be helpful to make this formula less "magic". --- Reply to Comment 1.1.1: Comment: We thank the reviewer for additional comments. We will update the paper to include this explanation.
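For readers encountering the eigenframework here, one common form of the predicted-risk expression is sketched below. This is quoted from memory of the eigenlearning literature, up to notational differences across papers, and is not necessarily the exact formula used in the paper under review.

```latex
% Kernel eigenvalues \lambda_i, target coefficients v_i, ridge \delta,
% sample size n, noise variance \sigma^2.
% The effective regularization \kappa \ge 0 solves the self-consistent equation
\sum_i \frac{\lambda_i}{\lambda_i + \kappa} + \frac{\delta}{\kappa} = n
% (take \delta = 0 for the ridgeless case), and the predicted test risk is
\mathcal{E}
  = \frac{n}{\,n - \sum_i \left(\frac{\lambda_i}{\lambda_i + \kappa}\right)^{2}}
    \left( \sigma^2
         + \sum_i \left(\frac{\kappa}{\lambda_i + \kappa}\right)^{2} v_i^2 \right).
```

The prefactor blows up as the summed squared "learnabilities" $\lambda_i/(\lambda_i+\kappa)$ approach $n$, which is one way to see how overfitting can become catastrophic.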
null
null
null
null
null
null
You Don’t Need Domain-Specific Data Augmentations When Scaling Self-Supervised Learning
Accept (poster)
Summary: This paper examines the question of whether data augmentations, specifically hand-crafted domain-specific augmentations, are needed for self-supervised learning. This question has relevance as the existing state of the art methods use such augmentations in the standard natural image domain, which leaves open the question of what one should do when trying to apply self-supervised learning to a new domain, e.g., medical imaging. Through experimental analysis, this paper demonstrates that the primary impact of data augmentation is in increasing the effective dataset size, and that with sufficient data, compute, and model complexity, the gap between performance with and without such data augmentations can be largely closed. In particular, the paper shows state of the art results under the condition of not using hand-crafted augmentations, while using a joint-embedding architecture. Strengths: + paper is well-written and well-reasoned, making it easy to follow + the question examined is of fundamental importance, in giving insight into self-supervised learning and what one should take into account when applying SSL to new data modalities + convincing experimental results to support the argument that the main effect of data augmentation in SSL is for increasing effective training set size, and that by scaling data set size, the performance gap with and without hand-crafted augmentations closes Weaknesses: - the experiments were all using DINOv2 - this is a secondary weakness since the paper offers the results as an existence proof that hand-crafted augmentations are not necessary. however, to have a more complete understanding of the effect of such hand-crafted augmentations, it would be interesting to know whether the results hold in general for SSL even if the specific learning algorithm/architecture were changed, to know whether this is a general phenomenon or if there is an aspect that is specific to something in DINO.
Technical Quality: 3 Clarity: 4 Questions for Authors: As a more open-ended question, do your results have any implications for reconstruction-based methods for SSL? I.e., in looking at Table 1, considering the proposed method is able to achieve better results than existing reconstruction-based methods, while being similarly constrained to not leverage hand-crafted augmentations, do you have thoughts on why this gap exists, or put another way, what kind of information would need to be added to the reconstruction-based methods to close the gap? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive reception of our work. In the following we will be addressing each of your comments. **W1**) Please refer to the general rebuttal. We add an analysis on MOCOv3 (a more modern version of the SimCLR paper) and explain why our choice is constrained to DINOv2 (the Crop setup was unstable with MOCOv3 and we didn't manage to train a model with it). Our claim might hold for other methods as long as it’s possible to scale them effectively to ImageNet-22k (which is not an easy task). In the meantime, we think that our paper can already impact people training DINOv2 models on other modalities [1,2,3,4,5,6]. **Q1**) It’s an interesting question about insights on reconstruction-based SSL. Our paper shows that the constraint on hand-crafted data-augmentations is not the main factor in the existing gap, but multiple factors can still explain it. First, our model uses DINOv2, which is carefully optimized to work on large scale datasets. I-JEPA, however, fails to train good models on ImageNet-22k, but according to the results in our paper, this could be explained by the use of suboptimal hyper-parameters. MAE also fails to scale effectively, in line with a new paper explaining that learning by reconstruction forces the model to learn uninformative features [7]. In the end, the issue might be that reconstruction-based methods are asked to learn too much unused information, and that scaling data increases this amount of information even further. [1] R. J. Chen et al. Towards a general-purpose foundation model for computational pathology [2] H. Xu et al. A whole-slide foundation model for digital pathology from real-world data [3] E. Vorontsov et al. Virchow: A Million-Slide Digital Pathology Foundation Model [4] T. Moutakanni et al. Advancing human-centric AI for robust X-ray analysis through holistic self-supervised learning [5] F. Perez-Garcia et al.
RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision [6] J. Tolan et al. Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on aerial lidar [7] R. Balestriero and Y. LeCun, Learning by Reconstruction Produces Uninformative Features For Perception --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the positive feedback. We will include the discussion in the revised manuscript.
Summary: Traditionally, it is believed that the effectiveness of Joint-Embedding Architectures (JEAs) such as SimCLR lies in their ability to map augmented views of the same image to the same representation in the latent space, thus requiring specific data augmentations that lead to superior downstream performance. Moreover, such augmentations, like blurring and color jittering, are not directly applicable to non-visual data like speech or text, potentially limiting the utility of JEAs to image-only applications. This work challenges the necessity of hand-crafted data augmentations for training Self-Supervised Learning (SSL) models with JEAs. Specifically, the authors provide a rigorous study using DINOV2, demonstrating that effective image representations can be achieved using minimal augmentation—specifically, only cropping without resizing—as long as the dataset is sufficiently large. Further, augmentations mainly serve to expand the effective size of the training dataset rather than enforcing the learning of invariances. Role of augmentations is studied on the three critical scaling dimensions in deep learning: data volume, computational power, and model size. To support these claims, experiments are conducted across a range of dataset sizes, from smaller datasets like ImageNet1k to larger ones such as LVD142, and included multiple model sizes, from small to large. Strengths: **Originality.** A lot of previous works focus on studying the impact of different data augmentations both from a theoretical and empirical perspective. This paper additionally studies this impact at scale, which to the best of my knowledge, is completely novel. It is clear how this work differs from previous contributions, and the analysis shows new insights. **Quality.** The paper is well written and the claims are mostly well-substantiated with extensive experimentation supporting them. **Clarity.** The paper is well-organized and clear, with understandable figures. 
Different data augmentation approaches and hyperparameter choices are clearly outlined. **Significance.** - The work provides evidence that one could train state of the art SSL approaches with *DINOv2* without hand-crafted data augmentations except cropping and resizing. This could be useful for scaling SSL applications to non-image domains such as time series etc. - The idea of studying the use of data augmentations to enforce invariance through the “Shared” augmentation setting is very interesting - The role of compute and non linear gains with changing hyperparameters is important for large scale training and is well highlighted Weaknesses: 1. My major concern is with respect to the usefulness and impact of this work. The authors highlight training of large scale SSL models with just cropping and resizing. As also mentioned in the paper, the key use case of this study would be generalizability of SSL approaches to domains such as medical imaging with totally different channels and characteristics. However, cropping may not be an effective augmentation strategy in these domains; for instance, in medical imaging, key features like cells or tumors often occupy a small portion of the image. The paper lacks evidence with respect to their stated motivation: does their learnt representation actually generalize better to non-vision domains as compared to the representations learnt through traditional hand-crafted augmentations? 2. Despite using data augmentations such as color jitter in JEPAs, it is unclear if the model is actually learning invariance to them. In some works [1][2], it has been shown that despite training a predictor in the latent space, the model can ignore the augmentations and merely learn invariance. There is no discussion from this perspective, which I believe is very relevant to the work. - [1] Garrido, Quentin, Laurent Najman, and Yann Lecun. "Self-supervised learning of split invariant equivariant representations."
arXiv preprint arXiv:2302.10283 (2023). - [2] Gupta, Sharut, et al. "In-Context Symmetries: Self-Supervised Learning through Contextual World Models." arXiv preprint arXiv:2405.18193 (2024). 3. I would like to see more insights into the gaps in performance observed in low and large data regimes. By the claims made in the paper, with long enough training, training without data augmentations should achieve the same performance as its counterpart. However, a gap still remains. More clarity on this would benefit the work. I am willing to change my score if the authors address the above comments. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is “AIM” in Table 1? The abbreviations for algorithms used in Table 1 are not defined or mentioned in the text. 2. “DFN-2B+” is undefined and used in Table 1. What is it? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Mentioned in the Weaknesses section Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for recognizing the importance of scaling SSL on other domains. In the following, we extend our experiments to include other imaging domains, making our conclusions more solid. **W1**) You indicated that the main motivation for this work, which is to allow using SSL models on different domains, was not supported by experimentation. Indeed, generalization to new domains is very important given that multiple papers used DINOv2 on medical imaging, remote sensing, and many other ones, either directly as is or as a pretrained model to finetune [1,2,3,4]. We decided to test (Table 1 in the rebuttal) our models on such domains where invariance toward domain-specific augmentations could be harmful: - logos, signs and product packaging, where for example text can be degraded by blurring and zooming, and arrows can be inverted if the features are invariant to flipping; - remote sensing, where colors and zooming can have a big impact on the performances; - medical imaging, where classes differ in type (shape vs texture) and size (small nodule, broken ribs, enlarged heart etc.). Table 1 in the rebuttal shows that a model trained without domain-specific augmentations tends to generalize better on most of these domains, notably on products (RP2K and Products-10k), remote sensing (FireRisk) and medical datasets (MIMIC, CheXpert, Vindr and NIH-14). We gained 4.8% acc. on GTSRB when switching from the original DINOv2 to the Crop-only one, showing the impact of resizing on this task. On RP2K, Products-10k, and medical imaging datasets, all methods without domain-specific data-augmentations get, in general, better performances, proving that those augmentations were tailored for ImageNet-like tasks but can be detrimental to other ones, as stated in [8]. **W2**) Thank you for the comment. Indeed, models might not learn invariance even when trained with data-augmentations.
However, we quantitatively show in Table 3 in the rebuttal that the Original setup is more invariant toward data-augmentations than the Shared, Crop+Resize and Crop setups. For that, we follow the prior work of [5, 6] and use as a metric the average cosine similarity between augmented views of the same image. We also normalize based on the average similarity between random images to take into account potential model variability. Analyzing the quantitative results of our empirical study in Table 3, we observe an interesting pattern, with the invariance decreasing between the Original, Shared, Crop+Resize and Crop settings. It was expected that the most invariant setup would be “Original”, and the least invariant would be “Crop”, but it’s interesting to see that the Shared and Crop+Resize setups have different levels of invariance despite having the same “invariance enforcement” in the loss. A more detailed analysis of data augmentations and equivariance would be a good follow-up to this paper. **W3**) The gap between low and large data regimes cannot be recovered only by training for longer on the smaller dataset. As it is discussed in this study, scaling SSL is harder than only changing the dataset’s size. Increasing the number of epochs alone will lead to overfitting. This happens less with data-augmentations as they artificially increase the dataset size. Increasing the amount of data alone will help, but the training will be suboptimal as the model will see each image for a shorter amount of time. Thus, scaling both training data and length is what can make models with and without data augmentations reach the same performances. Another remark about the remaining gap for our 500 epochs setup with LVD-142M with and without augmentations. We shortly mentioned this in the limitations, but here is a more detailed explanation: - Hyperparameters were tuned by DINOv2’s author using all data-augmentations.
We used the exact same parameters for all our setups, and thus are probably using suboptimal parameters when removing augmentations. - The larger the model used, the more data we need to close the gap. You can see this phenomenon in the paper, Figure 3 on the right. We might be in a setup (ViT-L) where more data is needed to fill the gap (for example, CLIP ViT-L models are trained using billions of images [7]). - We don’t use data-augmentations in our linear classifier training to remove confounding factors when evaluating our models. Adding data-augmentations here isn’t necessary to reach best performances using the original DINOv2. However, some invariances are actually good on ImageNet-like domains [8], and the ones used in the original setup are tailored exactly for this. The gap might close even more if we optimized data-augmentations during the classifier training, transferring the invariances from the pretraining step to the classification step. Questions: Sorry for the confusion about AIM and DFN-2B+. These abbreviations come from the paper [9]. We will add all citations to Table 1. [1] X. Song, General Purpose Image Encoder DINOv2 for Medical Image Registration [2] M. Baharoon et al. Towards General Purpose Vision Foundation Models for Medical Image Analysis: An Experimental Study of DINOv2 on Radiology Benchmarks [3] B. Cui et al. Surgical-DINO: adapter learning of foundation models for depth estimation in endoscopic surgery [4] X. Bou et al. Exploring Robust Features for Few-Shot Object Detection in Satellite Imagery [5] Q. Garrido et al. Learning and Leveraging World Models in Visual Representation Learning [6] A. Devillers & M. Lefort, EQUIMOD: an equivariance module to improve visual instance discrimination [7] M. Cherti, Reproducible scaling laws for contrastive language-image learning [8] I. Bendidi et al. No Free Lunch in Self Supervised Representation Learning [9] A. El-Nouby et al. 
Scalable Pre-training of Large Autoregressive Image Models --- Rebuttal Comment 1.1: Comment: Many thanks for conducting additional experiments and providing a detailed rebuttal that addresses many of the weaknesses identified and questions raised. I appreciate the addition of results on medical imaging, remote sensing and logos and signs datasets. I emphasize that all clarifications made during this rebuttal should be added to the revised manuscript to improve clarity of the work. Given the rebuttal addressed most of my concerns, I have increased my score to 5. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the positive feedback and valuable suggestions that helped us improve our paper. We will of course include the contents of the rebuttal in the camera ready.
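The invariance metric described in this rebuttal (average cosine similarity between embeddings of two augmented views of the same image, normalized by the similarity of random image pairs) could be sketched as below. The exact normalization the authors use is not specified, so this particular form, and the helper names, are our own assumptions.

```python
import numpy as np

def mean_cosine(a, b):
    """Mean cosine similarity between row-aligned embedding matrices."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))

def normalized_invariance(view_a, view_b, rand_a, rand_b):
    """Invariance score, roughly in [0, 1].

    view_a[i], view_b[i]: embeddings of two augmented views of image i.
    rand_a[i], rand_b[i]: embeddings of two unrelated images.
    Subtracting the random-pair baseline accounts for model-dependent
    anisotropy of the embedding space (one possible normalization).
    """
    same = mean_cosine(view_a, view_b)
    base = mean_cosine(rand_a, rand_b)
    return (same - base) / (1.0 - base)
```

With such a metric, the pattern reported in the rebuttal would correspond to the score decreasing monotonically across the Original, Shared, Crop+Resize and Crop settings.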
Summary: This paper demonstrates the possibility of training JEA SSL encoders using limited augmentations (not 0 augmentations as in the title) compensated by significantly increasing the size of the training data. This is built on the claim that most SSL works have relied on augmentations for SOTA performance. The experiments show that training with crop+resize augmentation alone, with much larger training data, can match the performance of ImageNet-1K + hand-crafted augmentations. More experiments demonstrate the benefit of scale and how augmentations' role diminishes as scale increases. Strengths: - The paper is very well written and easy to follow. Motivation and experimental results/takeaways are presented in a clear format. - The case study, while only focusing on DINOv2 and iBOT, is thorough with enough supporting ablations to the claims - Discussion on the role of scale throughout the paper is appreciated and certainly useful to conduct further research Weaknesses: - Misleading title - The title "you don't need data augmentations" is certainly misleading since cropping (an augmentation) is still applied under large-dataset regimes. The authors have also not demonstrated results on other methods to make a general claim about the un-usefulness of augmentations in SSL as a whole. - (Crop + Resize) and (Crop) are both still augmentations - I am skeptical about the main claims because I disagree with the main assumption that "crop" and "crop+resize" (1) do not increase the dataset size and (2) do not induce invariance. In "Crop", randomly removing 32 pixels (256 - 224) from the height and width very much changes the spatial information compared to the original image and certainly induces an invariance when used in multi-view SSL. Have the authors studied the impact of reducing cropping strength? What happens if you resize to say 230 and crop to 224?
With the above concern in mind, Crop+Resize is an even stronger augmentation and surely increases the dataset size artificially which questions several explanations of results (for example - line 220). A simple visual examination of RandomResizedCropped ImageNet-1K samples could show several low-overlapping copies of the same image since the main content need not always be a single object at the center of the image. - Masking is also an augmentation - Similar to the above discussion on cropping, masking is also a form of augmentation because it fundamentally alters input information and training in such a manner forces an invariance to such alterations. - Need results on other models - While the studies on DINOv2 are extensive, the bold claim of not needing augmentations for SSL needs to be studied across the realm of SSL because it contradicts the SimCLR paper (as discussed by authors in Introduction section) and a recent work [1]. [1] Morningstar et al, Augmentations vs Algorithms: What Works in Self-Supervised Learning Technical Quality: 2 Clarity: 4 Questions for Authors: See weaknesses section Confidence: 5 Soundness: 2 Presentation: 4 Contribution: 3 Limitations: Discussed but more raised in the Weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for recognizing the importance of the role of scale in self supervised learning and for their questions that made us think more deeply about the semantics and experiments we used throughout our paper. In the following we will be addressing each of your comments. **W1**) Please refer to the general rebuttal. Thanks to your comment, the title of our paper will be changed to: “Self-supervised learning doesn’t need domain-specific data augmentations at scale” to highlight more the claims of the paper. **W2-3**) Hand-crafted data-augmentation is the term used in the original I-JEPA paper [1] to talk about most augmentations but masking and RandomResizedCrop. Hand-crafted data augmentations are mentioned in I-JEPA as augmentations that can’t be used on other modalities (thus hand-crafted for the specific modality in use). In our paper, we used the same term but included the Resizing as a hand-crafted augmentation as it isn’t usable in text or modalities where resampling is impossible or doesn’t make sense. We propose to change “hand-crafted” to “domain-specific” to make it more explicit. We think that this change in semantics (we used hand-crafted data augmentation in the paper but not the original title) answers both of your points about the boldness of our claim (that was, we think, only in the title) and your question about the fact that cropping and masking are augmentations. We will also make sure to update the paper with the correct wording when necessary. To support our claims, we actually don't need our model to be trained with *exactly* zero invariance or data augmentations. A lot of papers are claiming that data augmentations are necessary to achieve good performance with classical SSL [1,2,3] and that it’s not the case for reconstruction-based methods like MAE, I-JEPA or AIM [4,5]. However, MAE, I-JEPA, and AIM still use data augmentations (masking and RandomResizedCrop). 
It’s also worth noting that all the data-augmentation ablations in [1] are only done on what we call “domain-specific augmentations” and that all their setups include RandomResizedCrop. We show that their results don't hold at scale with DINOv2, which we think is novel and interesting. We claim that we can remove all domain-specific augmentations and still train a very powerful model, using fewer augmentations than all the other methods in the literature. When we analyze the impact of data-augmentation and invariance (Table 3 in the rebuttal), we don’t need it to be exactly zero for our claims to hold. We just need the Shared and Crop+Resize setups to enforce less invariance than the Original setup. The performance gap that is created by enforcing less invariance is compensated by increasing the amount of data, showing that the main bottleneck was the artificial increase of data, not the enforcement of invariance. The impact of the RandomCrop and masking augmentations doesn’t change between all our setups, and we only analyze the difference relative to all the other augmentations. We also want to add that we have a table in the original paper (Table 2) that ablates the masking augmentation, training a strong model with only one augmentation (RandomCrop, without resizing and without masking). This model still beats all the other baselines from Table 1 in our paper by a large margin. **W4**) Please refer to the general rebuttal. We add an analysis on MOCOv3 (a more modern version of the SimCLR paper) and explain why our choice is constrained to DINOv2 (the Crop setup was unstable with MOCOv3 and we didn't manage to train a model with it). Our claim might hold for other methods as long as it’s possible to scale them effectively to ImageNet-22k (which is not an easy task). In the meantime, our paper can already impact people training DINOv2 models on other modalities [6,7,8,9,10,11].
[1] Morningstar et al, Augmentations vs Algorithms: What Works in Self-Supervised Learning [2] T. Chen et al. A Simple Framework for Contrastive Learning of Visual Representations [3] J.-B. Grill et al. Bootstrap your own latent: A new approach to self-supervised Learning [4] M. Assran et al. Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture [5] A. El-Nouby et al. Scalable Pre-training of Large Autoregressive Image Models [6] R. J. Chen et al. Towards a general-purpose foundation model for computational pathology [7] H. Xu et al. A whole-slide foundation model for digital pathology from real-world data [8] E. Vorontsov et al. Virchow: A Million-Slide Digital Pathology Foundation Model [9] T. Moutakanni et al. Advancing human-centric AI for robust X-ray analysis through holistic self-supervised learning [10] F. Perez-Garcia et al. RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision [11] J. Tolan et al. Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on aerial lidar
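The rebuttal above argues that cropping and patch masking are the only augmentations retained. As a minimal, hedged sketch of what such generic augmentations do (the image size, crop size, patch size, and mask ratio below are illustrative assumptions, not the actual DINOv2 training pipeline):

```python
import numpy as np

def random_crop(img, crop_h, crop_w, rng):
    """Take a random crop_h x crop_w window from an H x W x C image."""
    h, w, _ = img.shape
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    return img[top:top + crop_h, left:left + crop_w]

def random_patch_mask(img, patch, ratio, rng):
    """Zero out a random fraction of non-overlapping square patches."""
    out = img.copy()
    h, w, _ = img.shape
    ph, pw = h // patch, w // patch
    n_patches = ph * pw
    n_mask = int(round(ratio * n_patches))
    idx = rng.choice(n_patches, size=n_mask, replace=False)
    for i in idx:
        r, c = divmod(int(i), pw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))        # toy "image"
crop = random_crop(img, 48, 48, rng)          # shape (48, 48, 3)
masked = random_patch_mask(crop, 8, 0.5, rng) # half of the 6x6 patch grid zeroed
```

Both operations are modality-agnostic in spirit: masking applies to any tokenized input and cropping to any signal with a spatial or temporal extent, which is the core of the rebuttal's "domain-specific vs generic" distinction.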
null
null
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful questions. To recapitulate our paper, we summarize our contributions: - We can train a powerful and scaled self-supervised model with no domain-specific data-augmentations, in contrast to all alternative approaches. This model provides an existence proof that it is possible to train a model with a similar performance to its domain-specific data-augmented counterpart. This is true only at the ImageNet-22k scale. - We reproduce the results of prior literature [1,2,3] at the ImageNet-1k scale, showing that domain-specific data augmentations are necessary (if the amount of data is too small). We conclude that SSL does not rely on enforcing representational invariance to domain-specific data augmentations in order to perform well: it is possible to obtain strong models (within 1% of the state-of-the-art) as long as we scale training data. We never claimed that data augmentations were not useful, nor that removing them would lead to better performance on all benchmarks (we actually get a performance boost on some domains and show this in Rebuttal Table 1). In this work, we show that we don’t need most of them to achieve strong performance at scale, disproving the belief that they are the key to modern SSL. Given that our title might be less specific than the claims in our paper, we will rename our article “Self-supervised learning doesn’t need domain-specific data augmentations at scale”. Our conclusion doesn’t contradict but extends the prior knowledge present in the SSL literature (e.g. the SimCLR work [2], and more recent work such as Morningstar et al. [1]), proving that enforcing domain-specific invariances is not the core learning mechanism of SSL, and that dataset scale was the issue. The data augmentations that we use, i.e. 
cropping and masking, are very general by nature, suggesting it is possible to extend SSL approaches (in particular DINOv2) to other domains, given proper scale, while data augmentation can also be harmful to specific domains or classes [4]. The reviewers notably complained that our experiments were limited to only the DINOv2 and DINO methods. There are several reasons for this: - Our claim about not needing domain-specific augmentations only works with larger-scale datasets, but to the best of our knowledge, DINOv2 is the only method that scales to the ImageNet-22k scale. We provide additional experiments in Table 2 of the rebuttal, where we trained MOCOv3 [5] (an improved version of SimCLR) on ImageNet-1k and -22k, with and without domain-specific data augmentations. First, we see that we obtain the same results as DINOv2 when training on ImageNet-1k: removing augmentations has a big impact. We also observe that the method does not scale well to more data - at least not with the hyperparameters provided by the authors - and that ImageNet classification performance decreases when using the larger dataset. The same scaling issue happens with the official I-JEPA models [6] that we tested in Table 1 of our paper. - Scaling SSL to more data is harder than just changing the dataset. Tweaking hyperparameters to scale up a method is extremely compute-intensive (3-6M GPU-hours for DINOv2 [7]), and we believe that this is not necessary to support the claim that SSL does not need domain-specific augmentations to achieve good performance: a single existence proof, which we provide in this work, appears sufficient. Given the popularity of DINOv2 as the state-of-the-art self-supervised method across different domains [8,9,10,11,12,13], we think our work can have an important impact on more powerful and scaled SSL methods. 
In Rebuttal Table 1, we analyze the generalization capabilities of our different data-augmentation setups and show that removing augmentations can be beneficial in some domains. We also add results in Rebuttal Table 3 to quantify the invariance toward augmentations across the different models, proving that removing data augmentations indeed reduces the learned invariance. [1] Morningstar et al, Augmentations vs Algorithms: What Works in Self-Supervised Learning [2] T. Chen et al. A Simple Framework for Contrastive Learning of Visual Representations [3] J.-B. Grill et al. Bootstrap your own latent: A new approach to self-supervised Learning [4] I. Bendidi et al. No Free Lunch in Self Supervised Representation Learning [5] X. Chen et al. An Empirical Study of Training Self-Supervised Vision Transformers [6] M. Assran et al. Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture [7] M. Oquab et al. DINOv2: Learning Robust Visual Features without Supervision [8] R. J. Chen et al. Towards a general-purpose foundation model for computational pathology [9] H. Xu et al. A whole-slide foundation model for digital pathology from real-world data [10] E. Vorontsov et al. Virchow: A Million-Slide Digital Pathology Foundation Model [11] T. Moutakanni et al. Advancing human-centric AI for robust X-ray analysis through holistic self-supervised learning [12] F. Perez-Garcia et al. RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision [13] J. Tolan et al. Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on aerial lidar Pdf: /pdf/3af1debea6d26eed95441267f0835965797b279c.pdf
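Rebuttal Table 3 is said to quantify invariance toward augmentations. One standard way to measure such invariance (a hedged sketch; the authors' exact protocol is not specified here, and the toy encoder and augmentation below are invented for illustration) is the mean cosine similarity between embeddings of two augmented views of the same input:

```python
import numpy as np

def invariance_score(embed, images, augment, rng):
    """Mean cosine similarity between embeddings of two augmented views.

    A score of 1.0 means the encoder is fully invariant to the augmentation."""
    sims = []
    for img in images:
        a = embed(augment(img, rng))
        b = embed(augment(img, rng))
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims))

# Toy check: a mean-pooling "encoder" and a brightness-jitter "augmentation".
rng = np.random.default_rng(0)
embed = lambda img: img.mean(axis=(0, 1))              # per-channel mean as embedding
jitter = lambda img, rng: img * rng.uniform(0.8, 1.2)  # global brightness scaling
images = [rng.random((8, 8, 3)) for _ in range(4)]
score = invariance_score(embed, images, jitter, rng)
# Positive rescaling of the whole image does not change the cosine of
# mean-pooled features, so this toy encoder is fully invariant to jitter.
```

A lower score for a model trained without a given augmentation would indicate less learned invariance, which is the direction of the comparison the rebuttal describes.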
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
RouterDC: Query-Based Router by Dual Contrastive Learning for Assembling Large Language Models
Accept (poster)
Summary: The paper introduces RouterDC, a novel LLM query router that employs contrastive learning losses to train an encoder and LLM embeddings for routing queries efficiently. The motivation behind this approach is the variability in LLM performance across tasks and domains, and the computational efficiency of routing over ensemble methods. The proposed methodology involves using a small LLM encoder, `mDeBERTaV3-base`, and learnable LLM embeddings for candidate LLMs. RouterDC employs a supervised learning approach with ground truth annotations, using contrastive losses to optimize the routing mechanism. Experimental results demonstrate that RouterDC outperforms existing routing methods on both in-distribution and out-of-distribution tasks. Strengths: - The paper addresses a significant challenge in LLM utilization, focusing on efficient routing to optimize performance across different tasks and domains. - The contrastive learning approach for query routing is innovative, leveraging both *sample-LLM contrastive loss* and *sample-sample contrastive loss* to enhance training stability and performance. - Experimental results show that RouterDC significantly outperforms existing routing methods, indicating the effectiveness of the proposed approach. Weaknesses: - The parameter efficiency of RouterDC compared to existing routing methods is not addressed, which is important for understanding the scalability of the approach. For example, the training cost is likely significant when the number of LLMs scales up (each training query needs to be evaluated on each LLM), and retraining is required when any new LLMs are incorporated. This could impact the practicality of the system. - The paper does not incorporate LLM costs into the loss function. When multiple LLMs perform well (which is the motivation of the work), it would be natural to choose the cheapest one to enhance cost efficiency. 
- Table 3 shows that removing certain LLMs, such as `Chinese-Mistral-7B` or `dolphin-2.6-mistral-7b`, does not affect overall performance, and removing `Mistral-7B` even improves it. This indicates that including incapable LLMs can lead to unnecessary computational overhead and performance degradation. There is a lack of analysis for this issue, which can be crucial for an effective routing system in the real world. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Table 1 lacks a comparison between different routing methods’ training time or training computational cost. How does RouterDC compare in terms of parameter efficiency with existing methods? This information is crucial for understanding the scalability of the approach. 2. The performance improvements after incorporating sample-sample contrastive loss seem marginal. Can this be further analyzed or explained? 3. On OOD tasks, RouterDC performs worse than the best-performing LLM on certain individual tasks. Is there any analysis of the reasons? 4. Evaluation results on specialized routing benchmarks like [1] could provide a more comprehensive assessment of RouterDC's performance. 5. A normalized average score, instead of the simple average among the scores of different benchmarks, can provide a better comparison in Tables 1 and 2, considering the varying complexity and score scales across benchmarks. 6. There is no need to include RandomSelect as a weak baseline. 7. In the "Routing to Different Numbers of LLMs" evaluation, why were LLMs added in the chosen order? Based on Table 1, adding them in performance-descending order might yield smaller accuracy enhancements. 8. Based on Table 3, the incorporation of certain incapable LLMs can lead to unnecessary computational overhead and performance degradation. Would an LLM-dropping mechanism improve overall performance? 9. The visualization of query embeddings from different benchmarks is confusing. 
Did you cluster queries within one benchmark or across several benchmarks in the sample-sample contrastive learning loss function? --- [1] Hu, Qitian Jason, Jacob Bieker, Xiuyu Li, Nan Jiang, Benjamin Keigwin, Gaurav Ranganath, Kurt Keutzer, and Shriyash Kaustubh Upadhyay. "ROUTERBENCH: A Benchmark for Multi-LLM Routing System." arXiv preprint arXiv:2403.12031 (2024). Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The broader implications of this routing approach, including its scalability and cost efficiency in practical applications, should be more adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
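The routing mechanism summarized in this review (a small encoder embeds the query, which is scored against learnable per-LLM embeddings) can be sketched as follows; the stub vectors and cosine-similarity scoring are illustrative assumptions, with the mDeBERTaV3-base encoder replaced by a fixed embedding for brevity:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between a query embedding a (d,) and k LLM embeddings b (k, d)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return b @ a

def route(query_emb, llm_embs):
    """Pick the LLM whose (learnable) embedding is most similar to the query."""
    return int(np.argmax(cosine_sim(query_emb, llm_embs)))

# Toy example: 3 candidate LLMs with 4-dim embeddings.
llm_embs = np.array([[1., 0., 0., 0.],
                     [0., 1., 0., 0.],
                     [0., 0., 1., 0.]])
query_emb = np.array([0.1, 0.9, 0.0, 0.1])  # closest to LLM 1
assert route(query_emb, llm_embs) == 1
```

At inference only this similarity lookup runs, so routing overhead is a single small forward pass regardless of how many candidate LLMs there are.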
Rebuttal 1: Rebuttal: We sincerely thank you for the detailed and positive comments. We carefully address your concerns below. Please let us know if you have any follow-up questions. --- > Q1. parameter efficiency, scalability, training cost is likely significant when the number of LLMs scales up (each training query needs to be evaluated on each LLM), and retraining is required when any new LLMs are incorporated. > > Table 1 lacks a comparison ... training time ... parameter efficiency **A1.** Great suggestions! We compare RouterDC with other routing methods on the number of parameters and training time in Table R4 of the `Rebuttal PDF`. As shown, all methods are very efficient in both computation and parameters, i.e., only require about **28 minutes** of training time and have only **86M parameters**. Moreover, RouterDC is **data-efficient** in training. Though each training query needs to be evaluated on all candidate LLMs, the training set can be very small. Figure R2 in the `Rebuttal PDF` shows the performance of RouterDC with different numbers of training samples per task. We can see that the testing accuracy saturates quickly, indicating that **a small number of samples is sufficient** for learning the router. Moreover, with only 30 samples per task, RouterDC already outperforms the previous SOTA overall (57.21 vs 55.77). Thus, the total number of runs to query LLMs is affordable. As RouterDC requires very little training time, retraining the router when new LLMs are incorporated would not be a practical issue. Moreover, learning the router incrementally without retraining is also practical for future work. We will include computation and data efficiency analysis in the revision. --- > Q2. incorporate LLM costs into the loss function. > > Evaluation results on RouterBench **A2.** Thanks for your suggestions. Our primary focus is training a router to select suitable LLMs for queries. Hence, performance is used as the main metric for learning the router. 
To resolve the reviewer's concerns, we further conducted experiments on two tasks of RouterBench (i.e., GSM8K and MBPP) and considered the LLM costs. We modify the score $s_i^{(t)}$ to $s_i^{(t)}+c_i^{(t)}$, where $c_i^{(t)}$ is the cost of query $x_i$ using the $t$th LLM. Figure R3 in the `Rebuttal PDF` shows that RouterDC is more cost-effective than CosineClassifier and ZOOTER in both tasks. --- > Q3. including incapable LLMs can lead to unnecessary computational overhead and performance degradation. > There is a lack of analysis for this issue. **A3.** Thanks for your insightful comments. We agree that incapable LLMs are unnecessary. For example, Figure 9 shows that very few queries are routed to Chinese-Mistral-7B and dolphin-2.6-mistral-7b; thus, removing them can reduce computation without sacrificing performance. In practice, one can use a hold-out validation set to analyze the usage of candidate LLMs and off-load those LLMs that are rarely used. --- > Q4. improvements after incorporating sample-sample contrastive loss seem marginal. Can this be further analyzed or explained? **A4.** Sorry for the confusion caused by Figures 5 and 7 due to the large y-axis range. In fact, RouterDC (w/ $\mathcal{L}\_\text{sample-LLM}$ + $\mathcal{L}\_\text{sample-sample}$) achieves better average accuracy than RouterDC (w/ $\mathcal{L}\_\text{sample-LLM}$) by a significant margin of $2.02\%$ (Table 4 in Appendix A, $\lambda=1$ vs $\lambda=0$). We will clarify this in the revision. --- > Q5. On OOD tasks, RouterDC performs worse than the best-performing LLM on certain individual tasks. Is there any analysis of the reasons? **A5.** OOD tasks are much more challenging. Though RouterDC fails to achieve the highest accuracy on all OOD tasks, RouterDC can assemble complementary abilities of the candidate LLMs and achieve the best overall performance. 
Specifically, RouterDC performs comparably to the best candidate LLMs on all tasks (i.e., 38.81 vs 39.71 for PreAlgebra, 46.80 vs 47.34 for MBPP, and 51.93 vs 52.01 for C-EVAL). Moreover, RouterDC outperforms existing routing methods by a large margin. We will add this discussion to the revision. --- > Q6. comparison of normalized average score. **A6.** Thanks for your insightful suggestion. We normalize the score of method $\mathbb{A}$ by $$\frac{\text{Acc on task t using method $\mathbb{A}$}}{\text{Acc on task t using dolphin-2.9-llama3-8b}} \times 100\\%.$$ Tables R5 and R6 in the `Rebuttal PDF` report the normalized scores for the ID and OOD scenarios, respectively, showing that RouterDC outperforms existing routing methods by a large margin. --- > Q7. no need to include RandomSelect **A7.** Thanks for your suggestion. We will remove it accordingly in the revision. --- > Q8. In the "Routing to Different Numbers of LLMs" evaluation, why were LLMs added in the chosen order? Based on Table 1, adding them in performance-descending order might yield smaller accuracy enhancements. **A8.** Thanks for your comments. The adding order of LLMs in Figure 13 is the same as the order of candidate LLMs (top to bottom) in Table 1. We agree that the order will affect the accuracy improvement, e.g., adding an incapable LLM will yield small or no improvement. --- > Q9. Would an LLM-dropping mechanism improve overall performance? **A9.** Good suggestion. As incapable LLMs are unnecessary, one can greedily drop them according to validation performance. For example, by dropping Mistral-7B and Chinese-Mistral-7B, the average accuracy increases from 58.54 to 58.67. --- > Q10. Did you cluster queries within one benchmark or across several benchmarks? **A10.** Sorry for the confusion. We cluster all training queries from **several** benchmarks. We will clarify this in the revision. --- > Q11. 
broader implications: scalability and cost efficiency **A11.** Please see our reply to Q1 and Q2. --- Rebuttal 2: Comment: Thank you for the detailed responses. I am happy to vote for acceptance. Please ensure that these discussions and results are included in your revision. However, after re-reading the paper, I noticed that it currently lacks a comparison or at least a discussion of the existing cascade-based approaches [1-4] for assembling LLMs. Unlike ensemble-based methods, cascade approaches do not necessarily invoke all models and can also achieve good cost-performance trade-offs as routing approaches (indeed, they may invoke multiple LLM calls for one query, but this does not necessarily mean that their cost-efficiency is worse). Including such a discussion or comparison would provide a more comprehensive understanding of your work within the context of existing research. [1] Model Cascading: Towards Jointly Improving Efficiency and Accuracy of NLP Systems, EMNLP 2022 [2] Language Model Cascades: Token-level Uncertainty and Beyond, ICLR 2024 [3] Large Language Model Cascades with Mixture of Thoughts Representations for Cost-efficient Reasoning, ICLR 2024 [4] Online Cascade Learning for Efficient Inference over Streams, ICML 2024 --- Rebuttal Comment 2.1: Title: Reply to Reviewer 4ok1 Comment: Thanks again for your positive rating. We will certainly add the above discussions and experiments to the revision. --- > Q12. However, after re-reading the paper, I noticed that it currently lacks a comparison or at least a discussion of the existing cascade-based approaches [1-4] for assembling LLMs. **A12.** We have discussed a cascade-based method (i.e., FrugalGPT) in the related work of the paper (Lines 75-77, Lines 182-184). Thanks for bringing the four other cascade-based methods to our attention. 
We agree that cascade-based methods are one direction to achieve cost-effectiveness when choosing LLMs, but they are different from RouterDC in terms of **settings, inference cost, and tasks**. We discuss the differences between our RouterDC and the mentioned cascade-based methods below. (i) RouterDC considers **a different setting** compared with the cascade-based methods [1-4]. The cascade-based methods usually **assume that the capacity of LLMs depends on the model size**. Their intuitive idea is to query LLMs from weak (small) to strong (large) until a satisfactory answer is obtained, instead of calling the strong LLMs for all queries. Our RouterDC does not require this assumption and can select a suitable LLM from multiple small or large candidate LLMs. Hence, routing-based methods are more general. Furthermore, even if LLMs are of the same size, they may have different specialized capabilities. (ii) Cascade-based methods [1-4] may call LLMs **multiple times** for a query (in the worst case, all candidate LLMs need to be called), but our RouterDC only needs to call the selected LLM **once** at inference/testing. (iii) RouterDC is general and can be used for **generation tasks**, but Model Cascading [1] and Online Cascade Learning [4] are limited to **classification tasks** (e.g., SST2, MRPC, IMDB). Generation tasks are usually more useful and challenging than classification tasks in NLP. We will include the above discussion and related works in the revision. --- #### References [1] Model Cascading: Towards Jointly Improving Efficiency and Accuracy of NLP Systems, EMNLP 2022 [2] Language Model Cascades: Token-level Uncertainty and Beyond, ICLR 2024 [3] Large Language Model Cascades with Mixture of Thoughts Representations for Cost-efficient Reasoning, ICLR 2024 [4] Online Cascade Learning for Efficient Inference over Streams, ICML 2024
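Point (ii) above — cascades may call several LLMs per query while a router calls exactly one — can be made concrete with a toy sketch; the stub models, confidences, and threshold are invented for illustration:

```python
def cascade_infer(models, query, threshold):
    """Call models from weak to strong until one is confident enough; count calls."""
    calls = 0
    for model in models:
        answer, confidence = model(query)
        calls += 1
        if confidence >= threshold:
            return answer, calls
    return answer, calls  # fall through to the last (strongest) answer

def router_infer(models, router, query):
    """A trained router picks one model; exactly one LLM call is made."""
    return models[router(query)](query)[0], 1

# Stub "LLMs" returning (answer, confidence).
weak   = lambda q: ("weak answer", 0.3)
medium = lambda q: ("medium answer", 0.6)
strong = lambda q: ("strong answer", 0.95)
models = [weak, medium, strong]

_, cascade_calls = cascade_infer(models, "q", threshold=0.9)  # needed 3 calls here
_, router_calls = router_infer(models, lambda q: 2, "q")      # always exactly 1
```

The worst-case cascade cost grows with the number of candidate LLMs, whereas the router's per-query LLM cost is constant, which is the trade-off the rebuttal highlights.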
Summary: The authors propose a routing between different LLMs, based on classification of embeddings from a fine-tuned transformer model (mDeBERTaV3-base), on a per-query basis. To sharpen the classification, they chose positive and negative samples among the training tasks, and to stabilise training they add a loss term that encourages cohesive clusters. The results suggest that the system is capable of routing appropriately. Strengths: * Experiments use a realistic pool of local LLMs * Practical to implement - though makes most sense if the upstream LLMs are being paid for per-token, rather than by availability Original Rating: 4 Original Confidence: 3 Weaknesses: * Since the LLMs in the pool appear to be pretty decisively good at some of the tasks (Figure 9), the meat of the router task is to see whether it's possible to classify these different tasks via a small LM and embeddings. On its face, it doesn't seem too remarkable that this works. * Sample-Sample Contrastive Loss : Smells like a post-experiment fix, rather than a principled choice + Particularly since a pretrained (frozen?) mDeBERTaV3-base is used to determine which samples 'belong' to which clusters Minor point(s) * L30: "Figure 3 shows the scores of seven LLMs for an example query, where the top three LLMs have significantly higher scores." + Seems unclear whether top-3 is so different from top-4 : Maybe a better example could be used Technical Quality: 3 Clarity: 3 Questions for Authors: * For the "OOD" results, how disjoint (really) are (i) CMMLU and C-EVAL; (ii) HumanEval and MBPP? * Table 3: Robustness of RouterDC to LLM losses during inference - isn't this proving that (for instance) Meta-Llama3-8B is being consistently chosen for CMMLU? * L46: "To improve the training stability, we cluster the training queries" - intra-group vs inter-group (within batch?) + L225: The number of clusters N is set to 5 : Is it a coincidence that this matches the number of training tasks? 
- Appendix B : Is it troubling that task-identity is roughly the same as pretrained cluster labels? Perhaps the task prompts are a give-away... - L257: "Moreover, increasing N leads to higher average accuracy when N is small (≤ 4), but the accuracy saturates quickly." (ditto) * How could this approach work in a chat context? Is it just for single queries? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: * The router being learned here is a task-wise classifier. There is likely commercial value in having this kind of routing, but implementing/testing this doesn't seem particularly novel (apologies for the vagueness). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your efforts and useful comments. We take all comments seriously and hope that our reply can resolve your concerns. Please let us know if you have any follow-up questions. --- > Q1: the meat of the router task is to see whether it's possible to classify these different tasks via a small LM and embeddings > > The router being learned here is a task-wise classifier. **A1.** Sorry for the confusion. Though Figure 9 shows that RouterDC can route most of the queries from the same task to the same LLM, RouterDC is a query-based router instead of a task-wise classifier. - RouterDC is **not a task-wise classifier** that selects an LLM for each task. The performance of a task-wise classifier is bounded by that of the top-performing LLM. However, Table 1 shows that RouterDC can beat the top-performing LLMs on GSM8K, ARC-C, and HumanEval, suggesting that RouterDC is not simply a task-wise classifier. Furthermore, Figure 9 shows that RouterDC does not always route all queries from the same task to the same LLM. For example, RouterDC assigns 96% and 4% of GSM8K queries to MetaMath-Mistral-7B and dolphin-2.9-llama3-8b, respectively. - RouterDC is **a query-based router.** All training queries are merged together to learn the router. At the testing stage, the learned router assigns the testing query to the suitable LLM based on the similarity between the query and LLM embeddings. Both sample-LLM and sample-sample losses do not require the task identity. - The previous work **LoraRetriever (ACL 2024) is exactly a task-wise classifier**. As the task identity is unavailable in practice, the cluster label is used instead. Tables 1 and 2 show that LoraRetriever (clustering) is worse than RouterDC, indicating that RouterDC routes queries more effectively. --- > Q2: Sample-Sample Contrastive Loss : Smells like a post-experiment fix, rather than a principled choice. > ... 
> mDeBERTaV3-base is used to determine which samples 'belong' to which clusters **A2.** We agree that the sample-sample loss is designed to deal with the training instability observed in early experiments. Contrastive learning is an effective technique to retain the semantic similarity of sentences in the embedding space (SimCSE, EMNLP 2021; Sentence-T5, ACL 2022). Hence, we introduce the sample-sample loss to encourage the encoder to generate similar embeddings for semantically similar queries. Note that at inference (testing), RouterDC does not need to cluster queries. --- > Q3. Seems unclear whether top-3 is so different from top-4. **A3.** Thank you for pointing it out. We will fix it in the revision: "the top-three LLMs have significantly higher scores than the bottom-three LLMs". --- > Q4. how disjoint (really) are (i) CMMLU and C-EVAL; (ii) HumanEval and MBPP? **A4.** Thanks for the question. (i) A detailed comparison between **CMMLU and C-EVAL** is given in Appendix A of the CMMLU paper (arXiv:2306.09212), showing that they have different distributions and contain only 74 shared samples (**about 1%** of the CMMLU dataset). (ii) We check the overlap between **HumanEval and MBPP** by string matching and find that they are **completely disjoint**. --- > Q5. isn't this proving that (for instance) Meta-Llama3-8B is being consistently chosen for CMMLU? **A5.** Yes, Meta-Llama3-8B is consistently chosen for CMMLU queries unless missing since it performs significantly better than other candidate LLMs on CMMLU (Table 1), confirming that the learned router can route queries to suitable LLMs. --- > Q6. L46: "we cluster the training queries" - intra-group vs inter-group (within batch?) **A6.** Sorry for the confusion. We cluster **all** the training queries into several groups. At each iteration, for a query, we sample its in-group query and out-group queries from the same mini-batch. We will clarify this in the revision. --- > Q7. 
Is it a coincidence that N=5 matches the number of training tasks? **A7.** Sorry for the confusion. The number of clusters $N$ does not have to be the number of training tasks. We have conducted an experiment in the paper to study the sensitivity of $N$ (Lines 255-258). As shown in Figure 6, **RouterDC is insensitive to a wide range of $N\in [4, 9]$**, where the number of tasks is 5. In practice, we can choose $N$ by grid search using K-fold cross-validation. We will clarify this in the revision. --- > Q8. Appendix B : Is it troubling that task-identity is roughly the same as pretrained cluster labels? Perhaps the task prompts are a give-away. > > "increasing N ... saturates quickly." (ditto) **A8.** Sorry for the confusion. To study the relationship between task identity and cluster labels, we construct their confusion matrix in Table R1 of the `Rebuttal PDF`. As shown, some task identities are different from cluster labels. For example, the HumanEval queries are grouped into three clusters. We also construct the confusion matrices for $N=4$ and $N=9$ in Tables R2 and R3 of the `Rebuttal PDF`, respectively. Again, queries from the same task can be grouped into different clusters and a cluster can be shared across different tasks. We will add this discussion to the revision. --- > Q9. How could this approach work in a chat context? Is it just for single queries? **A9.** Great suggestion! Though RouterDC is designed as a query-based router, the framework can be extended to the chat context, e.g., selecting LLMs based on the recent conversation. We will study this in our future work. --- > Q10. doesn't seem particularly novel **A10.** The novelty of RouterDC includes **two contrastive losses**: (i) the sample-LLM loss pulls the query embeddings closer to the embeddings of the top-performing LLMs while pushing them away from the embeddings of the bottom-performing LLMs; and (ii) the sample-sample loss for training stability. 
The novelty is also recognized by Reviewers cnpT (**novel**) and 4ok1 (**innovative**). --- Rebuttal Comment 1.1: Comment: * A1+A8 - It was already clear that you are doing query-based rather than task-based classification. One of my concerns was that for the data you trained/tested over, the two things were so closely matched that the different datasets would leak the task just due to the wording, etc. Your Table R2 helps. * A4 : I'm surprised that there are *any* strict overlaps between the datasets : Interesting! But that doesn't change the point that the claim that the alternate datasets are 'OOD' seems like an overreach. I'm happy to update a little: New Rating: 5 New Confidence : 4 --- Reply to Comment 1.1.1: Title: Reply to Reviewer GGL7 Comment: Thanks for your further comments and **raising the score**. For the remaining concerns, we address them as follows. --- > Q11. One of my concerns was that for the data you trained/tested over, the two things were so closely matched that the different datasets would leak the task just due to the wording, etc. **A11.** We understand the reviewer’s concern that certain words in the query may leak the task identity, making it easy for RouterDC to perform like a task classifier. To resolve this concern, we conducted an additional experiment in **a single-task setting**, i.e., we train the router on the training set of HumanEval and evaluate it on the testing set. The single-task setting is an edge case where **all queries may contain the same task information**. Hence, the router needs to learn how to route queries appropriately based on the query itself instead of some possible task information contained in the query. The table below reports the testing accuracy. As can be seen, **RouterDC largely outperforms the best candidate LLM (i.e., dolphin-2.9-llama3-8b) and existing routing methods**, demonstrating that the router can select appropriate LLMs for queries based on query characteristics. 
We will add the experiment and discussion to the revision, which will definitely improve our work. \begin{array}{lc} \hline \text{Method} & \text{HumanEval} \newline \hline \text{Mistral-7B} & 28.98 \newline \text{MetaMath-Mistral-7B} & 29.80 \newline \text{zephyr-7b-beta} & 22.04 \newline \text{Chinese-Mistral-7B} & 21.43 \newline \text{dolphin-2.6-mistral-7b} & 45.10 \newline \text{Meta-Llama-3-8B} & 26.73 \newline \text{dolphin-2.9-llama3-8b} & 49.39 \newline \hline \text{ZOOTER} & 39.38 \newline \text{CosineClassifier} & 52.45 \newline \text{RouterDC} & \mathbf{56.32} \newline \hline \end{array} --- > Q12. I'm surprised that there are any strict overlaps between the datasets : Interesting! **A12.** Thanks for your comments! We guess there was a typo in the comment; it should be "there are **NOT** any strict overlaps between the datasets : Interesting!" > Q13. But that doesn't change the point that the claim that the alternate datasets are 'OOD' seems like an overreach. **A13.** Thanks for your insightful comments! We appreciate the reviewer raising the concern about the definition of OOD. In the paper, C-EVAL and MBPP are treated as OOD tasks as they have different task distributions or question-answer instructions. We briefly summarize their differences below. (i) CMMLU and C-EVAL have **different task distributions**. CMMLU contains more culture- and region-related tasks, while C-EVAL has more STEM tasks. Moreover, CMMLU and C-EVAL use **different prompts** to ask multiple-choice questions. C-EVAL uses continuous underscores to indicate the answer's location, whereas CMMLU employs no special notation for referencing the answer, except for brackets when the answer is within a sentence. (ii) HumanEval and MBPP assess the code generation proficiency of LLMs from **two distinct perspectives**. A HumanEval query **gives the header** of a Python function and some comments, requiring the LLM to **implement the rest** of the function. 
On the other hand, an MBPP query **gives an intent** and asks the LLM to **generate the function from scratch**. To further resolve this concern, we evaluate the learned router on **one more OOD task: JavaScript** [R1], which aims to generate JavaScript code to solve problems. Different from HumanEval, which generates Python code to solve problems, JavaScript can be viewed as **a distant OOD task**. The table below reports the testing accuracy. As can be seen, RouterDC outperforms existing routing methods by a large margin, demonstrating that our RouterDC is more effective in routing queries of the distant OOD task. We will include the additional experiments and discussions in the revision. \begin{array}{lc} \hline & \text{JavaScript} \newline \hline \text{Mistral-7B} & 29.88 \newline \text{MetaMath-Mistral-7B} & 31.83 \newline \text{zephyr-7b-beta} & 11.71 \newline \text{Chinese-Mistral-7B} & 17.68 \newline \text{dolphin-2.6-mistral-7b} & 45.00 \newline \text{Meta-Llama-3-8B} & 37.07 \newline \text{dolphin-2.9-llama3-8b} & 53.84 \newline \hline \text{ZOOTER} & 41.64 \newline \text{CosineClassifier} & 37.32 \newline \text{RouterDC} & \mathbf{48.66} \newline \hline \end{array} --- ### References [R1] CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Benchmarking on HumanEval-X. KDD 2023.
Summary: This paper studies the problem of assembling off-the-shelf LLMs to harness their complementary strengths. The authors propose a novel query-based router by Dual Contrastive learning (RouterDC), i.e., a sample-LLM contrastive loss and a sample-sample contrastive loss. The former contrastive loss aims at training the router such that it can assign suitable LLMs for queries, while the latter is for training stability. Experiments on various challenging tasks demonstrate that RouterDC performs better than existing routing methods and individual top-performing LLMs in both in-distribution and out-of-distribution settings. Strengths: 1. The proposed dual contrastive losses for training a query-based router are novel. The sample-LLM contrastive loss seems more sound for learning the router than the KL loss used in previous work ZOOTER. 2. Extensive experiments on both in-distribution and out-of-distribution tasks show that the proposed RouterDC outperforms existing routing methods on average. The performance shown in Fig. 2 confirms that RouterDC can harness the complementary strengths of off-the-shelf LLMs. 3. The paper is well-written and easy to follow. It provides many comprehensive ablation experiments, e.g., Fig. 11 shows the advantage of the sample-sample contrastive loss in improving the training stability of RouterDC. Weaknesses: 1. In Line 157, the authors claim that “The reason is that some similar queries can have dissimilar embeddings and may be routed to different LLMs.” Evidence should be provided to support this claim. 2. For the OOD setting (Table 2), RouterDC fails to beat the best individual LLMs on all tasks (e.g., 38.81 vs 39.72 on PreAlgebra). 3. RouterDC may require a large amount of labeled data to train the router. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. How about the performance of RouterDC without the sample-sample contrastive loss? 2. In Fig. 9, it seems that no samples are routed to the Chinese-Mistral-7B model, why? 3. 
Can you visualize the embeddings of training samples extracted by the encoder $\mathcal{E}(x;w)$ of RouterDC? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors are encouraged to discuss limitations in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the detailed and positive comments. We take all comments seriously and do our best to address every concern raised. Please let us know if you have any follow-up questions. > Q1. In Line 157, the authors claim that “The reason is that some similar queries can have dissimilar embeddings and may be routed to different LLMs.” Evidence should be provided to support this claim. **A1.** Thanks for your suggestion. Figure R1(a) in the `Rebuttal PDF` shows a t-SNE visualization of training query embeddings extracted by the encoder trained by $\mathcal{L}\_\text{sample-LLM}$. As can be seen, query embeddings belonging to different tasks are roughly mixed together. We also provide two GSM8K queries as follows, both of which require basic calculations of shopping costs. Their embeddings have very low similarity (only $-0.4589$) when the router is trained by $\mathcal{L}\_\text{sample-LLM}$ alone. ``` Mary does her grocery shopping on Saturday. She does her shopping only at a specific store where she is allowed a credit of $100, which must be paid in full before her next shopping trip. That week she spent the full credit limit and paid $15 of it on Tuesday and $23 of it on Thursday. How much credit will Mary need to pay before her next shopping trip? ``` ``` Betty is saving money for a new wallet which costs $100. Betty has only half of the money she needs. Her parents decided to give her $15 for that purpose, and her grandparents twice as much as her parents. How much more money does Betty need to buy the wallet? ``` After integrating $\mathcal{L}\_\text{sample-sample}$, training query embeddings have a clear cluster structure (Figure R1(b)). Moreover, the similarity between the above queries increases to $0.9982$. We will add this discussion to the revision. --- > Q2. For the OOD setting (Table 2), RouterDC fails to beat the best individual LLMs on all tasks (e.g., 38.81 vs 39.72 on PreAlgebra). 
**A2.** OOD tasks are much more challenging than in-distribution (ID) tasks. Though our RouterDC fails to achieve the highest accuracy on every task, RouterDC can assemble complementary abilities of the candidate LLMs and achieve the best overall performance (an improvement of 1.90%). Besides, RouterDC performs comparably to the best candidate LLMs on all tasks (i.e., 38.81 vs 39.72 for PreAlgebra, 46.80 vs 47.34 for MBPP, and 51.93 vs 52.01 for C-EVAL). Moreover, RouterDC outperforms existing routing methods by a large margin overall. We will add this discussion to the revision. --- > Q3. RouterDC may require a large amount of labeled data to train the router. **A3.** To resolve this concern, we conducted an experiment to study the performance of RouterDC with different numbers of training samples per task. As can be seen from Figure R2 in the `Rebuttal PDF`, the testing accuracy saturates quickly, indicating that **a small number of samples is sufficient** for learning the router (e.g., 100 samples per task). Moreover, with only 30 samples per task, RouterDC already outperforms the previous SOTA overall (57.21 vs 55.77), demonstrating that our RouterDC does not require a large amount of labeled data for training. We will include the experiments and data efficiency analysis in the revision. --- > Q4. How about the performance of RouterDC without the sample-sample contrastive loss? **A4.** Thanks for the suggestion. We compare RouterDC with and without the sample-sample loss against the previous SOTA in the following tables. As can be seen, in both ID and OOD scenarios, using the sample-LLM loss alone (i.e., RouterDC (w/ $\mathcal{L}\_\text{sample-LLM}$)) performs better than the previous SOTA (with an average accuracy improvement of $0.75\\%$ in the ID scenario and $0.50\\%$ in the OOD scenario). We will add this discussion to the revision. 
\begin{array}{lcccccl} \hline \textbf{(in-distribution)} & \text{MMLU} & \text{GSM8K} & \text{CMMLU} & \text{ARC-C} & \text{HumanEval} & \text{Avg} \newline \hline \text{Previous SOTA} & 63.33 & 66.63 & 51.77 & 57.10 & 40.00 & 55.77 \newline \text{RouterDC } (\text{w/ } \mathcal{L}\_\text{sample-LLM}) & 63.21 & 68.87 & 49.27 & 49.43 & 51.84 & 56.52 \ \text{ (+0.75)}\newline \text{RouterDC } (\text{w/ } \mathcal{L}\_\text{sample-LLM}+\mathcal{L}\_\text{sample-sample}) & 61.07 & 70.32 & 51.77 & 58.52 & 51.02 & \mathbf{58.54}\text{ (+2.77)} \newline \hline \end{array} \begin{array}{lcccl} \hline \textbf{(out-of-distribution)} & \text{Pre-Algebra} & \text{MBPP} & \text{C-EVAL} & \text{Avg} \newline \hline \text{Previous SOTA} & 35.36 & 43.12 & 52.01 & 43.50 \newline \text{RouterDC } (\text{w/ } \mathcal{L}\_\text{sample-LLM}) & 36.51 & 47.34 & 48.14 & 44.00 \ \text{ (+0.50)}\newline \text{RouterDC } (\text{w/ } \mathcal{L}\_\text{sample-LLM}+\mathcal{L}\_\text{sample-sample}) & 38.81 & 46.80 & 51.93 & \mathbf{45.85} \text{ (+2.35)}\newline \hline \end{array} --- > Q5. In Fig. 9, it seems that no samples are routed to the Chinese-Mistral-7B model, why? **A5.** Thanks for your insightful question. We can see from Table 1 that Chinese-Mistral-7B underperforms on all tasks and has the worst overall performance, suggesting that its specialized ability may be covered by other candidate LLMs. Hence, no samples are routed to Chinese-Mistral-7B, which also verifies that RouterDC can select suitable LLMs for queries. We will add this discussion to the revision. --- > Q6. Can you visualize the embeddings of training samples extracted by the encoder $\mathcal{E}(x;w)$ of RouterDC? **A6.** Thanks for the suggestion. Figure R1(b) in the `Rebuttal PDF` shows the t-SNE visualization of training queries extracted by the learned encoder of RouterDC. As can be seen, the embeddings exhibit a clear structure.
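The sample-LLM objective discussed throughout the rebuttal, pulling a query embedding toward the embedding of its best-performing LLM and away from the other candidates, can be sketched as an InfoNCE-style loss. The snippet below is an illustrative numpy sketch, not the paper's implementation; the function name, temperature value, and embedding dimensions are all assumptions.

```python
import numpy as np

def sample_llm_loss(query_emb, llm_embs, positive_idx, temperature=0.07):
    """InfoNCE-style sample-LLM contrastive loss (illustrative sketch).

    Pulls the query embedding toward the embedding of the top-performing
    LLM for that query (the positive) and away from the other candidates.
    """
    q = query_emb / np.linalg.norm(query_emb)
    L = llm_embs / np.linalg.norm(llm_embs, axis=1, keepdims=True)
    sims = (L @ q) / temperature               # cosine similarities / tau
    log_softmax = sims - np.log(np.sum(np.exp(sims)))
    return -log_softmax[positive_idx]          # -log p(positive LLM | query)

rng = np.random.default_rng(0)
query = rng.normal(size=16)        # embedding of one training query
llms = rng.normal(size=(7, 16))    # 7 learnable candidate-LLM embeddings
loss = sample_llm_loss(query, llms, positive_idx=3)
```

Minimizing this loss over a training set drives the router to assign each query a high softmax probability on the LLM that answered it best.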
Summary: This paper aims to improve the ability of LLMs by assembling them. The proposed method uses a contrastive learning strategy. It uses a sample-LLM contrastive loss, which pulls the query embedding closer to the embedding of the top-performing LLM. It also employs a sample-sample contrastive loss, which learns the distribution of the input with the help of clustering. Strengths: The contrastive learning idea is intuitive. The routing model has only 86M parameters, which is quite small compared to the LLMs. And the method achieves better results compared with simple voting or learning the routing with a reward model. Weaknesses: The contribution of this work lies in two parts: first, it uses a sample-LLM contrastive loss instead of directly learning the routing as a classification; second, it uses a sample-sample contrastive loss to make the training more stable. However, the relation between the two is missing. I am wondering how the two parts contribute to the final performance. Is it possible to use ZOOTER with the sample-sample loss? What do you expect as the result? I think the response partially addressed my concern. So I raised my score to 5. I do understand there are hyper-parameters that one could control to affect the results, but are the proposed two contrastive losses the most important ones to consider in this scenario? I would love to see more discussions in the paper. Technical Quality: 2 Clarity: 3 Questions for Authors: See the weakness part. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable comments. We really appreciate your efforts to help us improve our paper. We carefully addressed your concerns below and sincerely hope that our reply resolves your concerns. Please let us know if you have any follow-up questions. > Q1. the relation between the two contrastive losses is missing. I am wondering how the two parts contribute to the final performance. **A1.** Thanks for your insightful suggestions. The relation between the two contrastive losses can be seen in Figure 5 of the paper, which studies the effect of the hyperparameter $\lambda$ in Eq. (5) (i.e., $\mathcal{L}\_{\text{sample-LLM}} + \lambda \ \mathcal{L}\_{\text{sample-sample}}$). We can see that using the two contrastive losses together (i.e., $\lambda=1$) achieves better overall performance than using the sample-LLM contrastive loss alone (i.e., $\lambda=0$). Moreover, the overall performance of RouterDC is not sensitive to $\lambda$ over a wide range ($\lambda \in [0.5, 5]$), making it easier to choose the value of $\lambda$. To further study the contributions of the two contrastive losses to the final performance, we report the detailed results for both in-distribution (ID) and out-of-distribution (OOD) scenarios in the following tables. Since the sample-LLM loss provides the supervision signal and is essential for training the router, we focus on comparing RouterDC with and without the sample-sample contrastive loss. As can be seen from the tables below, RouterDC (w/ $\mathcal{L}\_\text{sample-LLM}$ + $\mathcal{L}\_\text{sample-sample}$) outperforms RouterDC (w/ $\mathcal{L}\_\text{sample-LLM}$) on average in both scenarios, demonstrating the usefulness of the proposed sample-sample contrastive loss. 
Moreover, compared with the previous SOTA, using the sample-LLM contrastive loss alone (i.e., RouterDC (w/ $\mathcal{L}\_\text{sample-LLM}$)) performs better (with an average accuracy improvement of $0.75\\%$ in the ID scenario and $0.50\\%$ in the OOD scenario), while RouterDC (w/ $\mathcal{L}\_\text{sample-LLM}$ + $\mathcal{L}\_\text{sample-sample}$) achieves better performance by a large margin of $2.77\\%$ in the ID scenario and $2.35\\%$ in the OOD scenario. We will add this discussion to the revision. \begin{array}{lcccccl} \hline \textbf{(in-distribution)} & \text{MMLU} & \text{GSM8K} & \text{CMMLU} & \text{ARC-C} & \text{HumanEval} & \text{Avg} \newline \hline \text{Previous SOTA} & 63.33 & 66.63 & 51.77 & 57.10 & 40.00 & 55.77 \newline \text{RouterDC } (\text{w/ } \mathcal{L}\_\text{sample-LLM}) & 63.21 & 68.87 & 49.27 & 49.43 & 51.84 & 56.52 \ \text{ (+0.75)}\newline \text{RouterDC } (\text{w/ } \mathcal{L}\_\text{sample-LLM}+\mathcal{L}\_\text{sample-sample}) & 61.07 & 70.32 & 51.77 & 58.52 & 51.02 & \mathbf{58.54}\text{ (+2.77)} \newline \hline \end{array} \begin{array}{lcccl} \hline \textbf{(out-of-distribution)} & \text{Pre-Algebra} & \text{MBPP} & \text{C-EVAL} & \text{Avg} \newline \hline \text{Previous SOTA} & 35.36 & 43.12 & 52.01 & 43.50 \newline \text{RouterDC } (\text{w/ } \mathcal{L}\_\text{sample-LLM}) & 36.51 & 47.34 & 48.14 & 44.00 \ \text{ (+0.50)}\newline \text{RouterDC } (\text{w/ } \mathcal{L}\_\text{sample-LLM}+\mathcal{L}\_\text{sample-sample}) & 38.81 & 46.80 & 51.93 & \mathbf{45.85} \text{ (+2.35)}\newline \hline \end{array} --- > Q2. Is it possible to use ZOOTER with the sample-sample loss? What do you expect as the result? **A2.** Thanks for your valuable suggestion! We conducted additional experiments to study whether the proposed sample-sample contrastive loss is useful for ZOOTER. The following tables show the testing accuracy for the ID and OOD scenarios. 
As can be seen, integrating $\mathcal{L}\_\text{sample-sample}$ into ZOOTER **leads to improvements** of $+1.52\\%$ and $+0.81\\%$ for ID and OOD, respectively, demonstrating that the proposed sample-sample contrastive loss is **beneficial** for ZOOTER. We will include the experiments and add this discussion to the revision, which will definitely improve our paper. \begin{array}{lcccccl} \hline \textbf{(in-distribution)} & \text{MMLU} & \text{GSM8K} & \text{CMMLU} & \text{ARC-C} & \text{HumanEval} & \text{Avg} \newline \hline \text{ZOOTER} & 60.48 & 66.69 & 45.27 & 53.13 & 44.29 & 53.97 \newline \text{ZOOTER } (\text{w/ } \mathcal{L}\_\text{sample-sample}) & 60.15 & 69.71 & 46.59 & 54.26 & 46.73 & \mathbf{55.49}\text{ (+1.52)}\newline \hline \end{array} \begin{array}{lcccl} \hline \textbf{(out-of-distribution)} & \text{Pre-Algebra} & \text{MBPP} & \text{C-EVAL} & \text{Avg} \newline \hline \text{ZOOTER} & 34.44 & 41.10 & 44.95 & 40.16 \newline \text{ZOOTER } (\text{w/ } \mathcal{L}\_\text{sample-sample}) & 36.05 & 39.84 & 47.03 & \mathbf{40.97} \ \text{ (+0.81)}\newline \hline \end{array} --- Rebuttal Comment 1.1: Title: A Gentle Reminder for Reviewer ytgM Comment: Dear Reviewer ytgM, We sincerely thank you again for your effort to improve our work. We have provided a detailed response to resolve your concerns. We would like to kindly remind the reviewer that the close date of the reviewer-author discussion is approaching. Please let us know if you have any additional questions or comments. Best, The Authors
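The sample-sample loss discussed above, which the review describes as learning the input distribution with the help of clustering, treats queries in the same cluster as positives and all other queries as negatives. Below is a minimal numpy sketch under that reading, assuming precomputed cluster assignments (e.g., from an offline k-means step); the function name and temperature are illustrative, not taken from the paper's code.

```python
import numpy as np

def sample_sample_loss(query_embs, cluster_ids, temperature=0.07):
    """Cluster-based sample-sample contrastive loss (illustrative sketch).

    Queries in the same cluster are pulled together; queries from other
    clusters act as negatives, which stabilizes router training.
    """
    Z = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    sims = (Z @ Z.T) / temperature
    n = len(Z)
    total, num_pairs = 0.0, 0
    for i in range(n):
        others = np.arange(n) != i                   # exclude self
        log_denom = np.log(np.sum(np.exp(sims[i][others])))
        for j in range(n):
            if j != i and cluster_ids[j] == cluster_ids[i]:
                total += log_denom - sims[i, j]      # -log softmax of positive j
                num_pairs += 1
    return total / max(num_pairs, 1)

rng = np.random.default_rng(1)
embs = rng.normal(size=(8, 16))                      # 8 query embeddings
clusters = np.array([0, 0, 0, 0, 1, 1, 1, 1])        # precomputed cluster ids
loss = sample_sample_loss(embs, clusters)
```

Since the loss only needs cluster ids, it could in principle be added to other routers (as in the ZOOTER experiment above) without changing their supervision signal.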
Rebuttal 1: Rebuttal: Dear Reviewers and ACs, We sincerely thank all the reviewers and ACs for your insightful and valuable comments. We are delighted that reviewers find that: - our work **addresses a significant challenge** in LLM utilization (`Reviewer 4ok1`). - our method is **intuitive** (`Reviewer ytgM`), **novel/innovative** (`Reviewers cnpT and 4ok1`), and **practical** (`Reviewer GGL7`). - RouterDC outperforms existing routing methods (`Reviewers ytgM, cnpT, and 4ok1`) and is capable of routing appropriately (`Reviewer GGL7`). The `Rebuttal PDF` contains the figures and tables that are referred to in the response to reviewers. Best, The Authors Pdf: /pdf/48ae89bb1a3c7a2d30212bbf9d9a9cd19026b25a.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Doubly Mild Generalization for Offline Reinforcement Learning
Accept (poster)
Summary: The paper proposes an algorithm to allow for some mild generalization to OOD actions in offline RL by proposing constraints on which actions to generalize on and by constraining the bootstrapping signal to avoid overestimation. It includes results and ablations on common offline RL benchmarks. Strengths: 1. The idea is very simple (which is a good thing!), intuitive, and can be easily integrated into existing algorithms. I think it provides nice insight into how to think about generalization and overestimation which can be useful for others. 2. The paper does a nice job of explaining the intuition behind the method and faults with previous methods. I appreciated Table 1. 3. The paper includes results on downstream performance metrics as well as ablation studies. Weaknesses: 1. It is a bit concerning that the main results are over only 5 seeds. While it is commonly done in prior work, I don’t think that means it's correct. I’d refer authors to [1]. I think it would be nice to increase the number of seeds. 2. Related to above, the variations seem to be based on standard errors (for example, Table 2) but it has typically been documented that these are unreliable and should be avoided [1] in RL research. Instead opting for some high-confidence confidence interval would be better. I would be willing to increase the score if the results were still true with more seeds. [1] Empirical Design in Reinforcement Learning. Patterson et al. 2023. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. How do the authors think the insights in this paper can be used for off-policy evaluation where the behavior policy is unknown and we want to evaluate a fixed target policy? The control setting allows some flexibility in, say, constraining what the optimal policy can be, but with evaluation this seems trickier. 2. The method seems to rely on using the empirical behavior policy to constrain the policy improvement. 
This seems reasonable, but the paper does not study this deeply. While this question may seem separate from the question investigated in this paper, in offline RL, it is typical for multiple policies to generate the data, which means the empirical policy is a multi-modal policy. I am curious how sensitive the proposed method would be to bad estimates in the empirical policy? At the very least I think the paper should discuss this point since the algorithm makes the assumption that such an empirical policy is easily accessible. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, the paper addresses this in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the time and effort you have dedicated to providing feedback on our paper and are grateful for the meaningful comments. **Q1: More random seeds and use of confidence interval in evaluation.** Thanks for the kind suggestions and this nice reference. [1] provides a comprehensive resource for how to do good experiments in RL. We have taken your advice and conducted experiments on 10 random seeds, reporting results based on a 95% confidence interval (CI). The following table shows the comparison between the newly obtained results (10seeds/95%CI) and the previously reported results (5seeds/SD). We also present the new learning curves on all the tasks in **Figures 1 and 2** of the PDF (attached to the global response). The results show that our method achieves about the same performance as under the previous evaluation criterion. In the revision, we will cite [1] and update the results accordingly. Table 1: Comparison of DMG under different evaluation criteria. 
| Dataset | DMG (5seeds/SD) | DMG (10seeds/95%CI) | | --- | --- | --- | | halfcheetah-m | **54.9$\pm$0.2** | **54.9$\pm$0.3** | | hopper-m | **100.6$\pm$1.9** | 100.5$\pm$1.0 | | walker2d-m | **92.4$\pm$2.7** | 92.0$\pm$1.2 | | halfcheetah-m-r | **51.4$\pm$0.3** | **51.4$\pm$0.4** | | hopper-m-r | 101.9$\pm$1.4 | **102.1$\pm$0.6** | | walker2d-m-r | 89.7$\pm$5.0 | **90.3$\pm$2.8** | | halfcheetah-m-e | 91.1$\pm$4.2 | **92.9$\pm$2.1** | | hopper-m-e | **110.4$\pm$3.4** | 109.0$\pm$2.6 | | walker2d-m-e | **114.4$\pm$0.7** | 113.9$\pm$1.2 | | halfcheetah-e | **95.9$\pm$0.3** | **95.9$\pm$0.2** | | hopper-e | 111.5$\pm$2.2 | **111.8$\pm$1.3** | | walker2d-e | **114.7$\pm$0.4** | 114.5$\pm$0.3 | | halfcheetah-r | **28.8$\pm$1.3** | 28.7$\pm$1.2 | | hopper-r | 20.4$\pm$10.4 | **21.6$\pm$6.6** | | walker2d-r | 4.8$\pm$2.2 | **7.7$\pm$3.0** | | locomotion total | 1182.8 | **1187.2** | | | | | | antmaze-u | **92.4$\pm$1.8** | 91.8$\pm$1.6 | | antmaze-u-d | **75.4$\pm$8.1** | 73.0$\pm$5.0 | | antmaze-m-p | 80.2$\pm$5.1 | **80.5$\pm$2.1** | | antmaze-m-d | **77.2$\pm$6.1** | 76.7$\pm$3.6 | | antmaze-l-p | 55.4$\pm$6.2 | **56.7$\pm$3.6** | | antmaze-l-d | **58.8$\pm$4.5** | 57.2$\pm$2.7 | | antmaze total | **439.4** | 435.9 | **Q2: How do the authors think the insights in this paper can be used for off-policy evaluation where the behavior policy is unknown and we want to evaluate a fixed target policy?** Thanks for this insightful question. Indeed, the evaluation setting appears trickier because the target policy is fixed. While mild action generalization no longer seems applicable in this scenario, mild generalization propagation still applies. Its direct analogue in the off-policy evaluation scenario is that: the Bellman target uses a mixture of (1) the Q-value of the action outputted by the target policy and (2) the Q-value of the nearest neighbor of that action in the dataset (or the replay buffer). 
This approach brings bias on the one hand, but can also be expected to have a smaller variance. An in-depth analysis of this would be an interesting direction for future work. **Q3: The method seems to rely on using the empirical behavior policy to constrain the policy improvement.** We apologize for the confusion. Actually, our method does not require training an empirical behavior policy. The implementation of policy improvement is discussed in Section 3.5 of the paper. Here, we first define a reshaped empirical behavior policy $\hat\beta^*$. Then we enforce the proximity between the trained policy and the reshaped behavior policy in Eq. (13). A special design worth noting is that we use the forward KL divergence $\mathrm{KL}(\hat\beta^*(\cdot|s) \| \pi_\phi(\cdot|s))$, which not only allows $\pi$ to select actions outside the support of $\hat\beta^*$ but also eliminates the need for pretraining an empirical behavior policy. Specifically, by substituting the analytical expression of $\hat\beta^*$ into the KL divergence and using importance sampling (the same derivation as in AWR [2]), Eq. (14) eliminates the need for pre-training a behavior model (possibly multi-modal, as you mentioned) and enables direct sampling from the dataset for optimization. **Reference** [1] Patterson, Andrew, et al. "Empirical design in reinforcement learning." arXiv preprint arXiv:2304.01315 (2023). [2] Peng, Xue Bin, et al. "Advantage-weighted regression: Simple and scalable off-policy reinforcement learning." arXiv preprint arXiv:1910.00177 (2019). --- Rebuttal Comment 1.1: Comment: Thank you for being clear in your response and addressing my concerns. Please do update the paper with the new seed and CI results. I have updated my score. --- Reply to Comment 1.1.1: Comment: Thank you for the valuable suggestion and we will make sure to update the paper with the 10seed/95%CI results. We sincerely appreciate your time and effort in reviewing our work.
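The 10-seed / 95% CI reporting adopted in the rebuttal above can be computed with a standard t-interval over per-seed scores. A minimal stdlib sketch follows; the per-seed scores are made up for illustration, and the critical value 2.262 ($t_{0.975}$ with 9 degrees of freedom) is hardcoded for exactly 10 seeds.

```python
from math import sqrt
from statistics import mean, stdev

def ci95(scores):
    """Mean and half-width of a 95% t-confidence interval over seeds.

    Assumes len(scores) == 10 so that t_{0.975, df=9} ~= 2.262 applies;
    other seed counts need a different critical value (e.g., from
    scipy.stats.t.ppf).
    """
    t_crit = 2.262
    m = mean(scores)
    half = t_crit * stdev(scores) / sqrt(len(scores))
    return m, half

# hypothetical per-seed normalized scores, for illustration only
seed_scores = [54.4, 55.1, 54.7, 55.0, 54.6, 55.3, 54.8, 54.9, 55.2, 54.5]
m, half = ci95(seed_scores)
report = f"{m:.2f} +/- {half:.2f}"
```

Reporting the interval half-width instead of the raw standard deviation is what [1] recommends for small numbers of RL runs.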
Summary: The paper introduces a novel approach called Doubly Mild Generalization (DMG) to tackle the well-known issue of over-generalization in Offline Reinforcement Learning (RL). Offline RL often suffers from extrapolation errors and value overestimation due to the generalization of value functions or policies towards out-of-distribution (OOD) actions. To address these challenges, the authors propose DMG, which comprises two key components: (i) mild action generalization and (ii) mild generalization propagation. Mild action generalization involves selecting actions in the close neighborhood of the dataset to maximize the Q values, while mild generalization propagation mitigates the propagation of potential erroneous generalization through bootstrapping without impeding the propagation of RL learning signals. Theoretically, DMG ensures better performance than in-sample optimal policies under oracle generalization scenarios and controls value overestimation even in worst-case generalization scenarios. Empirically, DMG achieves state-of-the-art performance on standard offline RL benchmarks, including Gym-MuJoCo locomotion tasks and challenging AntMaze tasks, and demonstrates strong online fine-tuning performance. Strengths: - **Innovative Approach**: The concept of doubly mild generalization is novel and addresses a significant challenge in offline RL by balancing the need for generalization with the risk of over-generalization. - **Theoretical Rigor**: The paper provides a thorough theoretical analysis of DMG, proving its advantages in both oracle and worst-case generalization scenarios. The theoretical guarantees add robustness to the proposed method. - **Empirical Performance**: DMG demonstrates superior empirical results, outperforming existing methods on multiple benchmarks. This includes both standard locomotion tasks and more complex navigation tasks, showcasing the versatility and effectiveness of the approach. 
- **Seamless Transition to Online Learning**: The method's ability to smoothly transition from offline to online learning, achieving strong fine-tuning performance, is a notable strength, making it practical for real-world applications where a mixture of offline data and online interactions is common. Weaknesses: - **Dependence on Continuity Assumptions**: The theoretical analysis relies on certain continuity assumptions about the Q function and transition dynamics. While these assumptions are standard, they may not hold in all practical scenarios, potentially limiting the applicability of the theoretical guarantees. - **Potential Sensitivity to Hyperparameters**: The performance of DMG might be sensitive to the choice of hyperparameters such as the mixture coefficient (λ) and penalty coefficient (ν). A more detailed analysis of hyperparameter sensitivity would be beneficial to understand the robustness of the method. Technical Quality: 3 Clarity: 4 Questions for Authors: - Can you explain why using max and min in Eq.5? It’s a bit confusing as the policy is normally a Gaussian or deterministic policy. - What about directly regularizing the KL between $\pi$ and $\hat \beta$ in Eq 13? - Comparison with more SOTA algorithms such as STR is suggested. - What does the symbol × mean in Figure 1 - It seems the performance of DMG when $\lambda=1$ in Figure 1 is inferior to that of TD3BC. I’m curious what’s the potential reason for this phenomenon as it should be better or similar to TD3BC when the in-sample term is removed. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have discussed the limitations in the draft. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the time and effort you have dedicated to providing feedback on our paper and are grateful for the meaningful comments. **Q1: Dependence on Continuity Assumptions.** Thanks for the comment. Indeed, our work relies on continuity assumptions about the learned Q function and transition dynamics. For the former, a continuous learned Q function can be relatively easily satisfied and is particularly necessary for the analysis of value function generalization. For the latter, continuous transition dynamics is also a standard assumption in the theoretical studies of RL [1,2,3,4,5]. Several previous works assume the transition to be Lipschitz continuous with respect to (w.r.t.) both state and action [1,2]. In our paper, we need the Lipschitz continuity to hold only w.r.t. action. **Q2: Potential Sensitivity to Hyperparameters** Thanks for the comment. We will answer this question from two aspects: the robustness of the DMG principle and the robustness of the specific DMG algorithm. We have conducted an ablation study on the mixture coefficient $\lambda$ and penalty coefficient $\nu$ in Section 5.3. As shown in Figures 1 and 2, DMG generally achieves the best performance with $\lambda \in (0,1)$ and a moderate $\nu$. This demonstrates that our DMG principle generally outperforms previous non/full generalization propagation and non/full action generalization principles, which proves the robustness of the DMG principle. On the other hand, our specific DMG algorithm also involves less hyperparameter tuning overall compared to other algorithms. As stated in Appendix C1, we use $\lambda=0.25$ for all tasks, and use $\nu=0.5$ for Antmaze, $\nu \in \{0.1, 10\}$ for Gym locomotion ($0.1$ for medium, medium-replay, random datasets; $10$ for expert and medium-expert datasets). This demonstrates the robustness of the specific DMG algorithm. **Q3: Can you explain why using max and min in Eq. (5)?** We apologize for the confusion. Eq. 
(5) defines the mildly generalized policy, where $\tilde \beta$ and $\hat \beta$ can be any policies beyond Gaussian or deterministic ones. The max and min in Eq. (5) mean that for any $a_1 \sim \tilde \beta(\cdot|s)$ (i.e., from the defined mildly generalized policy), we can find $a_2 \sim \hat \beta(\cdot|s)$ (i.e., from the dataset) such that $\|a_1-a_2\| \leq \epsilon_a$. In other words, in a given state, the distance between any action taken by $\tilde \beta$ and its closest action in the dataset is bounded by $\epsilon_a$. **Q4: What about directly regularizing the KL between $\pi$ and $\hat \beta$ in Eq. (13)?** Thanks for the question. This is equivalent to setting the inverse temperature $\alpha$ to 0. The comparisons between DMG ($\alpha=0$) and the original DMG on Gym locomotion tasks are shown in the following table. Compared to DMG, DMG ($\alpha=0$) exhibits only a small drop in performance. Therefore, DMG is robust to this hyperparameter, and the choice of aligning $\pi$ and $\hat \beta^*$ in implementation is not a key factor in DMG's good performance. Table 1: Averaged normalized scores on Gym locomotion tasks over 5 random seeds. | Dataset | DMG ($\alpha=0$) | DMG | | --- | --- | --- | | halfcheetah-m | 53.6 | **54.9** | | hopper-m | 90.1 | **100.6** | | walker2d-m | 91.0 | **92.4** | | halfcheetah-m-r | 50.7 | **51.4** | | hopper-m-r | 97.4 | **101.9** | | walker2d-m-r | **93.3** | 89.7 | | halfcheetah-m-e | **93.3** | 91.1 | | hopper-m-e | 106.5 | **110.4** | | walker2d-m-e | 110.8 | **114.4** | | total | 786.7 | **806.8** | **Q5: Comparison with more SOTA algorithms such as STR is suggested.** Thanks for the suggestion. Due to the page limit, we report the overall performance of STR (taken from its paper) and DMG in the following table. We will include the full comparison in a later revision. Overall, DMG achieves slightly higher performance in both the Gym locomotion and Antmaze domains.
Table 2: Total scores on Gym locomotion and Antmaze tasks, averaged over 5 random seeds. | Dataset | STR | DMG | | --- | --- | --- | | locomotion total | 1162.2 | **1182.8** | | antmaze total | 430.2 | **439.4** | **Q6: What does the symbol × mean in Figure 1?** Sorry for the confusion. The crosses (×) in Figure 1 indicate that the value functions diverge in several seeds. **Q7: It seems the performance of DMG when $\lambda=1$ in Figure 1 is inferior to that of TD3BC.** When $\lambda=1$, DMG, just like TD3BC, allows full generalization propagation in value training. However, the policy training and the specific hyperparameter configuration of DMG still differ from TD3BC. We hypothesize this may cause the discrepancy. **Reference** [1] Dufour, Francois, and Tomas Prieto-Rumeau. "Finite linear programming approximations of constrained discounted Markov decision processes." SIAM Journal on Control and Optimization 51.2 (2013): 1298-1324. [2] Dufour, Francois, and Tomas Prieto-Rumeau. "Approximation of average cost Markov decision processes using empirical distributions and concentration inequalities." Stochastics An International Journal of Probability and Stochastic Processes 87.2 (2015): 273-307. [3] Shah, Devavrat, and Qiaomin Xie. "Q-learning with nearest neighbors." Advances in Neural Information Processing Systems 31 (2018). [4] Xiong, Huaqing, et al. "Deterministic policy gradient: Convergence analysis." Uncertainty in Artificial Intelligence. PMLR, 2022. [5] Ran, Yuhang, et al. "Policy regularization with dataset constraint for offline reinforcement learning." International Conference on Machine Learning. PMLR, 2023. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response, which addresses my concerns; I have updated my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback! We sincerely appreciate the time and effort you’ve dedicated to reviewing our work.
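The neighborhood condition in Eq. (5) discussed above can also be checked numerically: for every action drawn from the mildly generalized policy, the distance to its nearest dataset action at the same state must be at most $\epsilon_a$. A minimal sketch of that max-min check (the function name and the synthetic data are our own illustration, not the authors' code):

```python
import numpy as np

def satisfies_eps_bound(sampled_actions, dataset_actions, eps_a):
    """Check the Eq. (5)-style condition: the max over sampled actions of the
    min distance to any dataset action is at most eps_a."""
    # pairwise Euclidean distances, shape (n_sampled, n_dataset)
    dists = np.linalg.norm(
        sampled_actions[:, None, :] - dataset_actions[None, :, :], axis=-1
    )
    worst_case = dists.min(axis=1).max()  # max_{a1} min_{a2} ||a1 - a2||
    return bool(worst_case <= eps_a)

rng = np.random.default_rng(0)
data_actions = rng.uniform(-1, 1, size=(100, 3))              # dataset actions
mild = data_actions[:20] + rng.normal(0, 0.01, size=(20, 3))  # small perturbations
print(satisfies_eps_bound(mild, data_actions, eps_a=0.1))
```

Here `satisfies_eps_bound` implements the $\max_{a_1}\min_{a_2}\|a_1-a_2\|\le\epsilon_a$ reading of the max and min described in the response.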
Summary: The offline RL community has recently shown a surge of interest in in-sample offline reinforcement learning algorithms. However, these algorithms can sometimes be too restrictive, unable to leverage the generalization capability of deep neural networks. As a remedy, the authors proposed an algorithm called Doubly Mild Generalization that uses the weighted mean of the in-sample maximum and mildly generalizable maximum Q-value. Experiments conducted on various offline RL benchmarks manifest the effectiveness of the proposed algorithm. Strengths: The authors provide rigorous mathematical proofs for most of their arguments. They also provide a practical version of their algorithm, which performs very well on multiple offline RL benchmarks. Weaknesses: 1. The constants $C_1$ and $C_2$ of Theorem 1 are state-dependent. 2. Lines 120-121 are difficult to understand. How does (4) relate to the generalizability of Q functions? 3. The meaning of Assumption 1 is unclear at first glance. An explanation of what might prevent $\mathcal{T}_{\textrm{DMG}}$ from being well-defined would be helpful for the readers. Also, I think the term "well-defined" is misleading. There's no problem with "defining" the DMG operator for state-action pairs outside the dataset; it's just that they would be incorrect and not be very meaningful. ### Minor comments 1. $\alpha$ is missing from Eq. (21) 2. $k=\infty$ → $k\to\infty$ in Equations (48) and (49) Technical Quality: 4 Clarity: 3 Questions for Authors: Please refer to the **Weaknesses** section. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the time and effort you have dedicated to providing feedback on our paper and are grateful for the meaningful comments. **Q1: The constants $C_1$ and $C_2$ of Theorem 1 are state-dependent.** Indeed, they are state-dependent. It is worth noting that Theorem 1 provides insight and motivates our method. **Q2: How does (4) relate to the generalizability of Q functions?** We apologize for the confusion caused by the lack of sufficient explanation. Eq. (4) is the update of the parametric Q function ($Q_{\theta} \rightarrow Q_{\theta'}$) at state-action pairs $(s,\tilde a) \notin \mathcal D$, which is exclusively caused by generalization. If $\tilde a$ is within a close neighborhood of $a$, then $C_2\|\tilde a-a\|$ is small. Moreover, as $C_1 \in [0,1]$, Eq. (4) approximates an update towards the true objective $\mathcal T_u Q_\theta(s,\tilde a)$, as if $Q_\theta(s,\tilde a)$ were updated by a true gradient step at $(s,\tilde a) \notin \mathcal D$. We will add these explanations in a later revision. **Q3: The meaning of Assumption 1 is unclear at first glance. Rephrase the term "well-defined".** Thanks for the kind suggestion. Indeed, the term "well-defined" may not be very suitable in this context. To avoid confusion, we will delete the sentence "In other words, $\mathcal{T}_{\mathrm{DMG}}$ is well defined in the mild generalization area $\tilde \beta(a|s)>0$." in Assumption 1. In addition, we will include more explanations before line 158 as follows. Because the mild generalization area $\tilde \beta(a|s)>0$ may contain some points outside the offline dataset, $\mathcal{T}_{\mathrm{DMG}}$ might query Q values of such points. In this assumption, we assume that the generalization in the mild generalization area is correct and meaningful. The rationale for such an assumption… **Q4: Minor comments** Special thanks for your careful review; we will fix these in a later revision.
--- Rebuttal Comment 1.1: Comment: Thank you for your response. Theorem 1 now makes much more sense. Please add that explanation to the main paper. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable suggestions and we will make sure to add the explanations to the main paper. We sincerely appreciate the time and effort you devoted to reviewing our work.
Summary: This paper studies the problem of extrapolation error and value overestimation in Reinforcement Learning (RL). The authors exploit generalization in offline RL by using mild action generalization and mild generalization propagation. Strengths: 1. The problem studied is of interest: offline RL typically struggles to generalize beyond datapoints seen in the dataset. 2. Strong theoretical results complemented by rigorous experiments. I'm impressed with the number of baselines the authors have tried out. Weaknesses: Assumption 11 looks very restrictive. The scenarios to which the approach can be applied are significantly reduced if the transition needs to be Lipschitz in the action space (for every state, next-state pair). For example, for robotics tasks, where transitions are deterministic, the transition matrix will take values in {0, 1}, making the Lipschitz constant very large in this scenario. Technical Quality: 3 Clarity: 3 Questions for Authors: The idea of generalization to state-action pairs in the neighborhood of those seen in the dataset has been studied in https://arxiv.org/pdf/2402.12570 and I urge the authors to cite and state the differences to the approach in this paper. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This paper has no significant limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the time and effort you have dedicated to providing feedback on our paper and are grateful for the meaningful comments. **Q1: Assumption 11 looks very restrictive.** We apologize for the lack of a detailed explanation of this assumption in the paper. To our knowledge, Assumption 11 (i.e., Assumption 3) is common in theoretical studies of RL [1,2,3,4,5]. Several previous works assume the transition to be Lipschitz continuous with respect to (w.r.t.) both state and action [1,2]. In our paper, we need the Lipschitz continuity to hold only w.r.t. action. However, we also acknowledge, as you mentioned, that the assumption reduces the theoretical applicability. Despite this, DMG performs quite well empirically on those robotics tasks with deterministic dynamics. In real-world scenarios, even for deterministic dynamics, there is some noise, resulting in a narrow distribution for state transitions. In this case, the assumption still applies, although the magnitude of the Lipschitz constant may vary depending on the level of noise. **Q2: The idea of generalization to state-action pairs in the neighborhood of those seen in the dataset has been studied in [6] and I urge the authors to cite and state the differences to the approach in this paper.** Thanks for the reference. The research work [6] is crucial in the analysis of multi-task offline RL, and we will cite it and include the following discussions. Recently, theoretical advancement [6] has explored multi-task offline RL from the perspective of representation learning and introduced a notion of neighborhood occupancy density. The neighborhood occupancy density at some $(s, a)$ in the dataset for a source task is defined as the fraction of points in the dataset within a certain distance from $(s, a)$ in the representation space. The authors use this concept to bound the representational transfer error in the downstream target task.
In contrast, DMG is a widely compatible idea in offline RL and provides insights into many offline RL methods. DMG balances the need for generalization with the risk of over-generalization in offline RL. Specifically, DMG achieves doubly mild generalization, comprising mild action generalization and mild generalization propagation. Generalization to state-action pairs in the neighborhood of the dataset is a specific realization of mild action generalization. Additionally, in our work, the neighborhood is defined based on the action space distance. **Reference** [1] Dufour, Francois, and Tomas Prieto-Rumeau. "Finite linear programming approximations of constrained discounted Markov decision processes." SIAM Journal on Control and Optimization 51.2 (2013): 1298-1324. [2] Dufour, Francois, and Tomas Prieto-Rumeau. "Approximation of average cost Markov decision processes using empirical distributions and concentration inequalities." Stochastics An International Journal of Probability and Stochastic Processes 87.2 (2015): 273-307. [3] Shah, Devavrat, and Qiaomin Xie. "Q-learning with nearest neighbors." Advances in Neural Information Processing Systems 31 (2018). [4] Xiong, Huaqing, et al. "Deterministic policy gradient: Convergence analysis." Uncertainty in Artificial Intelligence. PMLR, 2022. [5] Ran, Yuhang, et al. "Policy regularization with dataset constraint for offline reinforcement learning." International Conference on Machine Learning. PMLR, 2023. [6] Bose, Avinandan, Simon Shaolei Du, and Maryam Fazel. "Offline multi-task transfer RL with representational penalization." arXiv preprint arXiv:2402.12570 (2024). --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. My concerns have been resolved and I've increased my score. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback! We truly appreciate your time and effort in reviewing our work.
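As a rough sketch of the DMG principle described in these responses, a $\lambda$-weighted blend of an in-sample maximum and a maximum over a mild neighborhood of dataset actions, consider the following toy version. All names are hypothetical, and the box perturbation is our own loose stand-in for an $\epsilon_a$-neighborhood; this is not the authors' implementation:

```python
import numpy as np

def dmg_max(q_fn, state, dataset_actions, lam, eps_a, rng, n_perturb=8):
    """Schematic DMG-style value: a lam-weighted mean of the mildly
    generalized maximum and the in-sample maximum of Q."""
    q_in = max(q_fn(state, a) for a in dataset_actions)  # in-sample max
    # mild action generalization: dataset actions plus small box perturbations
    # (a loose stand-in for an eps_a-neighborhood in action space)
    candidates = list(dataset_actions)
    for a in dataset_actions:
        for _ in range(n_perturb):
            candidates.append(a + rng.uniform(-eps_a, eps_a, size=a.shape))
    q_mild = max(q_fn(state, a) for a in candidates)     # mildly generalized max
    return lam * q_mild + (1 - lam) * q_in

rng = np.random.default_rng(0)
dataset_actions = [rng.uniform(-1, 1, size=2) for _ in range(10)]
q = lambda s, a: -float(np.sum(a ** 2))  # toy Q: prefers actions near the origin
print(dmg_max(q, None, dataset_actions, lam=0.25, eps_a=0.1, rng=rng))
```

Setting `lam=0` collapses to a purely in-sample value, while `lam=1` lets the neighborhood maximum propagate fully, mirroring the interpolation studied in the ablation on $\lambda$.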
Rebuttal 1: Rebuttal: ### **Global Response** We thank all the reviewers for taking the time to read our manuscript carefully and for providing constructive and insightful feedback. We are encouraged by the positive comments of the reviewers, such as: - Meaningful research problem, innovative approach, and nice insights (Reviewers kJjy/Kscy/iL9e); - Strong theoretical results with rigorous proofs (Reviewers kJjy/RKt1/Kscy); - Superior performance on multiple benchmarks with many baselines and several ablation studies (Reviewers kJjy/RKt1/Kscy/iL9e); - Seamless transition to online learning and strong fine-tuning performance (Reviewer Kscy). Meanwhile, we have been working hard to address the reviewers' concerns and questions and have provided detailed responses to the individual reviews below. We hope our response could address the reviewers' concerns. We would be more than happy to resolve any remaining questions in the time we have and are looking forward to engaging in a discussion. Manuscript updates: - Update the results with 10 random seeds based on [1]. - Include the discussions regarding [2]. - Add more explanations of how Eq. (4) relates to the generalizability of Q functions. - Rephrase the term "well-defined" and include more explanations of Assumption 1. - Correct other typos. Reference: [1] Patterson, Andrew, et al. "Empirical design in reinforcement learning." arXiv preprint arXiv:2304.01315 (2023). [2] Bose, Avinandan, Simon Shaolei Du, and Maryam Fazel. "Offline multi-task transfer rl with representational penalization." arXiv preprint arXiv:2402.12570 (2024). Pdf: /pdf/9d76144e3a9fddbec9662a0f4c1fdf14395c7739.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Great Minds Think Alike: The Universal Convergence Trend of Input Salience
Accept (poster)
Summary: This paper studies the resemblance of the functions obtained by training neural networks across various network depths and widths. They do so by looking at the gradient fields of the learned functions $f_1, f_2$, and defining their similarity by taking the cosine similarity between the gradients, then averaging over the inputs of the functions. They empirically observe that increasing the width results in higher similarity between the gradient fields, from which they deduce two properties: (1) the average gradient norm increases with the width of the net, and (2) the direction of the average almost does not change when going to networks with larger width, but the variance decreases and the distribution becomes more concentrated. They also calculate the similarity measure and some properties of the gradient distribution assuming the distribution is Gaussian or spherically symmetric. The major claim of this work is summarized in hypotheses H1 and H2 on lines 145 and 160. Strengths: The authors formulate an interesting hypothesis based on empirical observation regarding the behavior of the gradient field of learned functions in DNNs: the direction of the vector does not change too much when we increase the size of the network. They form this hypothesis from the observation that the average dot product of the gradient fields across networks with different widths increases compared to the norm of the average of the gradient field for the smaller network. Weaknesses: My major concerns: (1) The claims are not backed by sufficient reasoning and relevant experiments. Note that the similarity measure is a function of the dot product of the average normalized gradients, so seeing an increasing trend in this quantity can be the result of two things: (*) the angle between the averages of the normalized gradients doesn't change or gets smaller, or (**) the size of the average itself is increasing.
Note that either (*) or (**) alone is sufficient to produce an increasing value of the cosine similarity that the authors have defined. However, observing an empirical increase in the value of similarity when going to larger nets, the authors conclude both (*) and (**) at the same time, while it might be the case that (**) happens, i.e., the size of the average is increasing fast, so that even though the directions of the averages are getting further apart, the overall dot product is still increasing. Therefore, in order to scientifically claim both (*) and (**), one approach is for the authors to study the angle between the normalized averages (or the dot product of NORMALIZED averages) separately, especially because these claims are not backed by theoretical arguments. (2) The authors discuss the 'saw distribution' and model the behaviour of gradients with this distribution assuming the actual gradient distribution is spherically symmetric (or close to it), an assumption that seems very strong and probably incorrect for actual neural nets. Is there a particular observation behind this assumption? The authors discuss some high-level calculation with this distribution on line 246, but I didn't find any calculation in the appendix. (3) The writing is vague and does not flow. In particular, the story for the theoretical calculation with the Gaussian distribution is not explained, and the assumptions are not justified at all. The section regarding the saw distribution needs a lot of changes to become more clear. Some minor points: - Given networks A, B where A has larger width, the authors claim that the observation that the average similarity of A to B is larger than that of B to itself is 'counter-intuitive' and 'paradoxical'.
This is a wrong claim and needs to be omitted from the paper; the reason it is not paradoxical is that you are working with 'some measure of similarity' rather than a 'distance' or 'metric'. For a simple illustrative example, if you define the similarity of two numbers a, b to be their product ab, then if a > b > 0, ab > b^2, but it doesn't mean that because a, b are more similar, we should conclude that a is 'closer' to b than b is to itself. - In Equation (5) you are estimating the normalized average of the gradient, which is used in calculating your similarity measure, but I think the quantity of interest is the normalized average of the normalized gradients (because of the definition of cosine similarity that you use)? Minor comments: - at the H1 hypothesis in line 145: the mean vector \mu is not defined at that point of the paper - typo in line 158 -> should be k_1 and k_2 Technical Quality: 3 Clarity: 2 Questions for Authors: What conclusion for training/optimization/generalization efficiency do the authors look to obtain from this observation? To me the average of normalized gradients seems not to be correlated with the generalization ability of neural nets. Maybe the authors can add more concluding points or remarks about their empirical observation. - What is the difference between the message of Figure 3 and 5? - Are you assuming the gradient distribution is Gaussian in section C? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The claim needs to be further backed by more delicate experiments. Furthermore, the assumptions for the calculation of the behavior of the similarity measure using certain classes of distributions need to be justified at least in some regime of training DNNs. For the rest see weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer We4a We thank the reviewer for acknowledging our contribution. Regarding the reviewer's concerns, we clarify that there was some miscommunication in the empirical verifications. First, we would like to answer the clarification questions: **[Eq. 5 (W4)]** We thank the reviewer for pointing out this typo! As the empirical estimation of $\mu$, $\tilde{\mu}$ is computed as the **normalized average of the normalized gradients** throughout the experiments. This is the same as how $\mu$ is defined in L152 (where $u$ is the normalized gradient). It is defined as $$\tilde{\mu}(k;x)=\Big(\frac{1}{M}\sum_{i=1}^M\frac{\nabla_x f_i(x)}{\|\nabla_x f_i(x)\|}\Big)\Big/\Big\|\frac{1}{M}\sum_{i=1}^M\frac{\nabla_x f_i(x)}{\|\nabla_x f_i(x)\|}\Big\|\approx \mu(k;x),\quad f_i\in\mathcal{F}(k).$$ We will revise it in the manuscript. **[Fig. 3 and 5 (Q1)]** - Fig. 3 presents the results of $\rho(k_1,k_2)$, which is **the average of the cosine similarity between two families**. This is defined in L110-112 and serves as a starting point for the two hypotheses. The increase in the diagonal motivates and verifies H1. The increase in the rows and columns motivates and (partially) verifies H2, as mentioned by the reviewer. - Fig. 5 shows the results of **the cosine similarity between the averages of the normalized gradients of two families**. This is the angle between the population means of two $\mathcal{G}$s. As also mentioned by the reviewer, this verifies H2. **[Gradient Similarities (W1)]** After clarifying the miscommunication, we now address the reviewer's major concerns: The reviewer's observation of the two possible factors is insightful. Both of them are already considered **separately** in the manuscript. (ii) is discussed in the intra-family hypothesis. As for (i), there were some miscommunications due to the typos in Eq. 5. The **angles between the averages of the normalized gradients** are exactly what we used to verify H2. The results are presented in Fig.
5, with definitions in the caption as $\mathbb{E}\_{x\in\mathcal{X}}CosSim(\tilde{\mu}\_j(x;\mathcal{F}),\tilde{\mu}_{j'}(x;\mathcal{F}))$. This is the dot product of normalized averages of the normalized gradients suggested by the reviewer. The results demonstrate an extremely high cosine similarity between the means and thus provide verification of H2. Furthermore, we reassure the reviewer with the results in Fig. 10. The mean attack is performed with the means estimated using the family of $k=160$ (L291-297). Thus, if the concerned scenario were true, the mean attack would not lead to the best results when attacking other families. **[Saw Distribution (W2,W3)]** - The symmetry assumptions (W2). To study the degree of concentration of the gradients, we study the cosine similarity $t = u^T\mu$ between an individual gradient and the mean. This naturally leads to rotationally symmetric distributions since the distribution on the intersection between $S^{d-1}$ and the hyperplane does not affect the distribution of $t$. To resolve the concern, we carry out an empirical study of the distribution on the intersection (i.e. conditioned on $t$). Specifically, we train 1000 CNN models with $k=40$ and seeds 1~1000 on CIFAR-10 and compute $t$ for each test sample. The distribution of the first sample is visualized in Fig. R7(left). We partition the range of $t$ into 10 intervals by every 10 percent of the frequency, and inspect the **direction of the mean of the gradients in each interval**; each direction is estimated by 100 models. If these conditional mean directions are consistent with the population mean direction, then the gradients are symmetrically distributed on each $S^{d-2}$ hypersphere (R7(right)), thus verifying the rotational symmetry. We investigate the cosine similarities between the conditional and unconditional mean directions on the first 1000 samples. The 10×1000 similarity values have a mean and std of approximately 0.970 and 0.013, respectively.
Thus the rotational symmetry is empirically verified. - Marginalization Derivations (W3). The marginal distribution $p_{original}$ in Sec. 3.3 is obtained by taking the marginalization of the Saw distribution through integration on the hypersphere. It was not claimed as a theorem or lemma since it is not novel. We will include a detailed derivation in the appendix. **[Gaussian Assumptions (W3, Q2)]** - The Gaussian distribution described in the appendix is not related to the Saw distribution by any means. The only assumption involved in the Saw distribution is the symmetry over the $S^{d-2}$ (e.g. dashed circles in Fig. 4). - Appendix C serves as a sanity check or a null hypothesis on how to interpret the empirically observed cosine similarities. It is introduced in Sec. 3.1 L215, where the cosine similarities between the population means of different model families are studied. Fig. 5 shows that such cosine similarities are very high. However, the significance of these results may not be well interpreted, given that in 2-D/3-D scenarios this cosine similarity range may not be considered very high. Therefore, the analysis in Appendix C aims to uncover how difficult it is to achieve high cosine similarity as the dimension increases (e.g. Fig. 16). **[Generalizations & Population Means (Q0)]** It can be observed that as the model capacity increases, the distributions of models become more concentrated (illustrated in Fig. 4(a) and verified in Fig. 3,5,6, etc.). This suggests that **as a single model approaches the population mean, the testing performance also increases**. To verify this claim, we carry out experiments on deep ensembles since they approach the population mean of the model directly instead of by increasing capacity like single models. It can be found in Fig. 8 that although approaching the population mean differently, deep ensembles and single models have similar trends.
This provides support for the connection between the population means and better generalizability. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your response, I hope that the authors can improve the presentation of their theoretical claims, since currently it seems hard to follow. --- Reply to Comment 1.1.1: Title: Thanks very much for your response Comment: We appreciate your comments and suggestions very much! We will revise the manuscript to improve the presentation of the theoretical claim and avoid potential miscommunications. Regarding the technical concerns, we believe that these have been adequately addressed in our responses. Therefore, with all due respect, we sincerely hope you will reconsider the assessment of our work. Kind regards, Authors
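The quantity $\tilde{\mu}$ described in the rebuttal above (normalize each gradient, average, then normalize the average) is short to write down in code. The snippet below is a synthetic illustration only: random vectors stand in for the gradients of trained models, and the two "families" and their noise levels are invented for the demo:

```python
import numpy as np

def normalized_mean_direction(grads):
    """tilde-mu: normalize each gradient, average them, normalize the average."""
    units = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    mean = units.mean(axis=0)
    return mean / np.linalg.norm(mean)

def cos_sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# two synthetic "families" whose gradients scatter around a shared direction,
# the second more concentrated (mimicking a larger-capacity family)
rng = np.random.default_rng(0)
shared = np.array([1.0, 2.0, -1.0]) / np.sqrt(6.0)
fam_small = shared + rng.normal(0.0, 0.5, size=(100, 3))
fam_large = shared + rng.normal(0.0, 0.1, size=(100, 3))
print(cos_sim(normalized_mean_direction(fam_small),
              normalized_mean_direction(fam_large)))
```

In this toy setup the cross-family cosine similarity comes out high because both families scatter around the same shared direction, loosely mirroring the H2-style behavior discussed in the rebuttal.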
Summary: This paper studied the uncertainty introduced in stochastically optimized DNNs via the input saliency maps. By empirical evaluations, the authors discovered that 1) within the same model architecture, models with different capacities tend to align in terms of their population mean directions of the input salience, and 2) the distributions of the optimized models concentrate more around their shared population mean as the capacity increases. These observations are very interesting and shed some light on the applications of DNNs such as black-box attacks. Strengths: The authors took a suitable approach to study the uncertainty introduced in stochastic training of DNNs. The observations made by them are very interesting and inspiring, and well supported by empirical evidence. The authors did a good job explaining their main discoveries using figures and equations. These findings could shed light on the practice of DNNs. Weaknesses: The main complaint I have about the paper is its presentation. Although the observations are quite interesting to me, I found that the writing of the paper can be improved. 1. More context is needed for Figure 1. 1.1. How are the dots plotted? I assume you did something like PCA to reduce the gradient dimension to 3 and then plotted them on the sphere. 1.2. You said on line 55 that the second hypothesis is illustrated in Figure 1(b), but there is no indication of model capacities in the figure. 2. Even though the authors tried to distinguish model architecture from model family, the introduction of width and depth complicates the discussion and makes the definition less clear. It would be good to refine the presentation here. For instance, you could give a concrete example like: ResNet_depth10_width10_seed0 and ResNet_depth10_width10_seed1 have the same model architecture and belong to the same model family; ResNet_depth10_width10 and ResNet_depth10_width20 have the same model architecture and belong to different model families, etc. 3.
It is unclear to me what is defined on line 96. 4. The presentation of the paragraph on line 101 and Figures 2 & 3 can be improved. 4.1. Why do you use the notation $\rho_\text{ind}$? What does ind mean? 4.2. Can you give clear, different names to $\rho_\text{ind}$ and $\rho$, and make both of them display equations? 4.3. In the captions of Figures 2 & 3, can you use the $\rho$ notations and refer to their display equations, respectively? 4.4. The x and y labels in Figures 2 & 3 should be $k_1$ and $k_2$. 5. $\mu(k; x)$ is not defined in Hypothesis I. 6. Some $k_1$ should be $k_2$ on line 158 and in the display equation below. And why do you need $k_1 > k_2$ for this? Technical Quality: 3 Clarity: 2 Questions for Authors: Besides the points I made in the Weakness section, I have one more question. When practitioners scale their models, they usually increase both depth and width simultaneously (especially in the context of scaling law). In your experiments, you mostly studied the effect of increasing width. Could you add results in the flavor of Figures 2 & 3 where the model capacity is parametrized by both the depth and width following some scaling law relationship? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer RiRS We thank the reviewer very much for acknowledging our work! We now answer the reviewer's questions as follows to make sure that all remaining concerns are resolved. **[Figure 1 Clarifications (W1)]** We appreciate the reviewer for pointing out the potential ambiguity of Fig. 1. We clarify that Fig. 1 is for demonstration purposes and to provide a straightforward intuition for the hypotheses and the phenomena. We will update the manuscript to make sure there is no miscommunication. 1.1. The dots are plotted synthetically for a clear visualization. 1.2. Fig. 1(b) is to illustrate the hypothesis of (i) shared means and (ii) the convergence trend, compared with Fig. 1(a). **[Model Architecture Clarification (W2)]** We thank the reviewer very much for the valuable point. For a better presentation, we will add examples to elaborate on how model architectures and families are defined, following the ways suggested by the reviewer. **[Definition of $\mathcal{F}$ in L96 (W3)]** L96 aims to define the space $\mathcal{F}$ of functions we focus on in this work. Note that given a fixed input dimension, all models can be seen as a function $f:\mathbb{R}^d\rightarrow\mathbb{R}$, regardless of the architecture, the capacity, etc. However, if the space of models $\mathcal{F}$ is simply defined as all such functions $\\{f\in\mathcal{C}|f:\mathbb{R}^d\rightarrow\mathbb{R}\\}$, many invalid models will be included. Therefore, we add a constraint to the functions in $\mathcal{F}$ that they fit the training distribution well. We will revise the definition to a mathematical formula like $\mathcal{F} = \\{f\in\mathcal{C}(\mathbb{R}^d)|\mathcal{L}(f;\mathcal{X}\_{train}, \mathcal{Y}\_{train}) < \epsilon \\}$. **[Definitions of $\rho,\rho_{ind}$ (W4)]** We thank the reviewer for the suggestion! The notations $\rho$ and $\rho_{ind}$ can indeed be a little confusing. We clarify them as follows. 4.1.
The subscript "ind" in the notation $\rho_{ind}$ refers to "individual". It is defined in L103 as **the cosine similarity of two models**. Note that the inputs of $\rho_{ind}$ are $f^{(1)},f^{(2)}$, which are two functions (models). Differently, $\rho(k_1,k_2)$ (defined in Eq. 1) represents **the expectation of the cosine similarity between two arbitrary models from $\mathcal{F}(k_1)$ and $\mathcal{F}(k_2)$**. It takes the width parameters $k_1,k_2$ as the input. Ideally, $\rho$ should be used to study the similarity. We carry out the experiments by training 100 models for a given $k\in\\{10,20,40,80,160\\}$ for the four model architectures and three datasets. This already results in $100\times 5\times 4\times 3=6000$ trained models. To demonstrate the influence of the capacity $k$ at a higher resolution, we investigate $k\in\\{8, 10, 12, 14, 16, 20,\cdots, 384, 448\\}$ (L106). This is computationally infeasible for $\rho$. Therefore, we compare the results between $\rho_{ind}$ and $\rho$ in Fig. 6 to verify that **$\rho_{ind}$ serves as a computationally efficient surrogate for $\rho$**. 4.2. We will term $\rho_{ind}$ and $\rho$ clearly as "individual similarity" and "average similarity" to avoid ambiguity. And as suggested by the reviewer, we will move the definition of $\rho_{ind}$ (L103) and $\rho$ (Eq. 1) together for a better presentation. 4.3. Originally, Fig. 2 serves the purpose of motivating the problem and appears before the definition of $\rho, \rho_{ind}$. Thus $\rho_{ind}$ was not used in the caption. As suggested by the reviewer, we will re-arrange the figure and text to include $\rho_{ind}$ in the caption of Fig. 2 for a better presentation. 4.4. We appreciate the reviewer for pointing this out. We will revise the labels in Fig. 2 and 3. **[$\mathbf{\mu}(k;\mathbf{x})$ in H1 (W5)]** Thanks for pointing this out! $\mathbf{\mu}(k;\mathbf{x})$ refers to the normalized population mean of the gradient, which is defined in L152.
We will re-arrange this part and move the definition to "The Intra-Family Hypothesis" for a better presentation. **[$k_1,k_2$ in L158]** Note that in L158, we are discussing the cross-family similarity regarding the population mean of each family. Since it is studied through the cosine similarity, it is symmetric between $k_1$ and $k_2$. Note that for cross-family similarity, $k_1\ne k_2$. Therefore, the condition $k_1>k_2$ is equivalent to the condition $k_1\ne k_2$ in H2. We will revise the manuscript to make it consistent throughout the discussion. **[Depths and Widths (Q1)]** We appreciate the valuable suggestion from the reviewer. We acknowledge that the depth is also a factor that affects the model capacity. However, since changing the depth usually leads to inconsistency in widths, we study the influence of depths via the -small and -large suffixes in the manuscript. To address the reviewer's question regarding the influence of depths completely, we carry out additional experiments with different settings so that depths and widths can be altered independently by the parameters $d$ and $k$, respectively. $d$ determines the number of layers, and $k$ determines the width of the layers. Unlike the gradually increasing channels (e.g. $[k,2k,4k,8k]$) in the manuscript, here we set the same number of channels for all layers. In this way, the change in the number of layers does not affect the widths anymore. For example, when $d=3$, the layers are $[k, k, k]$. The results are shown in Fig. R1 of the attached PDF file. It can be found that (1) given a fixed depth or width, the influence of the other factor is similar when scaled up; (2) depths behave slightly differently from widths: larger widths lead to higher similarities, while closer structures in depths have higher similarities. This is verified by the higher similarities near the diagonal entries in Fig. R1 (Right) compared with Fig. R1 (Left).
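To make the distinction between the two similarity measures concrete, here is a small self-contained sketch. It uses purely synthetic unit gradients scattered around a shared mean direction; the dimension, noise model, and sample counts are our own assumptions for illustration, not the paper's actual setup. It computes $\rho_{ind}$ as the cosine similarity of two models' normalized input gradients and estimates $\rho(k_1,k_2)$ by averaging $\rho_{ind}$ over sampled model pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3072  # e.g. a flattened 32x32x3 CIFAR-10 input

def unit(g):
    """Normalize an input gradient to a unit vector u."""
    return g / np.linalg.norm(g)

def rho_ind(u1, u2):
    """Individual similarity: cosine similarity of two models' gradients."""
    return float(u1 @ u2)

def rho(fam1, fam2):
    """Average similarity: mean rho_ind over pairs drawn from two families."""
    return float(np.mean([rho_ind(u1, u2) for u1 in fam1 for u2 in fam2]))

# Synthetic families: unit gradients scattered around a shared mean direction,
# with dispersion shrinking as the capacity parameter k grows (the hypothesized trend).
mu = unit(rng.normal(size=d))
def sample_family(k, n_models=20):
    return [unit(mu + rng.normal(size=d) / np.sqrt(k)) for _ in range(n_models)]

# Two independent draws per capacity, so rho contains no self-pairs.
assert rho(sample_family(160), sample_family(160)) > rho(sample_family(10), sample_family(10))
```

With this toy dispersion model, larger $k$ yields noticeably higher average similarity, mirroring the convergence trend the rebuttal describes.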
--- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I thank the authors for their detailed response. I have no more questions.
Summary: This paper studies the distribution of input saliency maps of trained neural networks at varying depth and width. The authors observe that as the model capacity increases, these distributions converge towards and become concentrated around a shared mean direction. The authors also state two hypotheses for the behavior of these distributions and provide empirical evidence for them. Strengths: - The paper clearly formulates and discusses two hypotheses (H1, H2) that help capture the empirical observations. - While the experiments are small scale, they are sufficient to support the authors' main claims and observations. Weaknesses: - The authors do not distinguish between the stochasticity introduced by random initialization and the stochasticity introduced by optimization. In Appendix A, they discuss the similarity scores at initialization, but a more interesting ablation would be to compare fixing the initialization and then randomizing the batches with fixing the ordering of the batches but randomizing the initialization. - Related to the above point, there is no discussion of the effect of the learning rate or batch size, which should greatly affect the final results. This would help disentangle the randomness introduced by initialization vs. optimization. Technical Quality: 3 Clarity: 2 Questions for Authors: - Have the authors explored the effects of different optimization hyperparameters (e.g. learning rate, batch size) or optimization algorithms (e.g. Adam) on the convergence trends observed in the paper? - The experiments in this paper are focused on computer vision. Do the authors expect similar conclusions to hold in other domains as well (e.g. NLP)? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer fWRt We appreciate the reviewer very much for the acknowledgment of our work. To further strengthen the reviewer's confidence in our findings, we address the reviewer's remaining questions as follows. **[Fixed Initializations with Random Batch Orders (W1)]** We thank the reviewer for the suggestion of separating the stochasticity of the random initialization of the model weights from the stochasticity of the randomization of batch orders in SGD. In order to address the reviewer's concern, we carry out additional experiments, where all models are initialized with the same weights by setting `seed = 0` before the initialization. After the initialization, we set the seed manually to 1~100 in the training process. The experiments are carried out using CNNSmall models with $k=20$ on CIFAR-10. *It should be noted that this is actually equivalent to uniformly sampling from the original distributions. Therefore, we expect an almost identical distribution compared with the models used in the manuscript, where seeds are determined at the beginning of all steps.* Since the normalized gradients $\mathbf{u}$ are of 3072 dimensions, it is infeasible to compare the distribution directly. To verify the uniformity, we inspect $t = \mathbf{u}^T\mathbf{\mu}(\mathbf{x})$ described in L177-190 of the manuscript. This marginal distribution is 1-D on $[-1,1]$, and we study the Wasserstein distance between the distribution of models with identical (random) initialization and the distribution of models with different (random) initializations. The distance is approximately $0.0225$, which means that the distributions are very close in terms of the dispersion degree. **[Other Sources of Stochasticity (W2, Q1)]** We agree with the reviewer that different training criteria such as the learning rates, the batch sizes, the solvers, etc. can affect the resulting models. However, it should be noted that a criterion determines a distribution of trained models.
The phenomenon discovered in this work holds for all such criteria, not only the one used in the manuscript. To completely resolve the reviewer's concern, we carry out additional experiments to investigate the influence of different training options, including learning rates, batch sizes, solvers, and data augmentations. The experiments are described in the global response in detail, and the results are presented in the attached PDF file. **[Other Data Types]** We thank the reviewer very much for the question regarding other data types. We clarify that text data is not commonly studied in the literature on benign overfitting. Additionally, conventional NLP training is typically conducted in an unsupervised manner, making it difficult to carry out similar experiments. However, since NLP models are now often built on transformers, we investigate whether the transformer and attention mechanism exhibit this property as a starting point. Specifically, we train vision transformer (ViT) models on CIFAR-10 with varying capacities controlled by $k\in \\{10, 20, 40, 80\\}$. CIFAR-10 has an input size of $32\times 32$ pixels, so we set the patch size to $4\times 4$, resulting in $8\times 8$ patches. We set the embedding size to $4k$, split across $k/2$ heads, and set the depth to $8$. We then vary seeds in 1~100, train 100 models for each $k$, and study the mean of the similarity $\rho(k_1,k_2)$ (i.e. the same experiments as Fig. 3 in the manuscript) and the similarity of the population mean (i.e. the same as Fig. 5 in the manuscript). The results are shown in Fig. R6. It can be observed that although distinct from convolutional layers, the transformer structure also exhibits the discovered convergence trend. It can also be noted that the degree of dispersion of ViTs is much higher than that of CNNs. Therefore, based on the additional experiments on transformers, we can deduce that the convergence trend may also hold for NLP tasks where transformers are extensively used.
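The marginal-distribution check described in the W1 response above can be sketched as follows. The data here are entirely synthetic (the dispersion level and group construction are our own assumptions, standing in for the fixed-init vs. random-init model groups); SciPy's 1-D Wasserstein distance is used for the comparison of the projected marginals $t = \mathbf{u}^T\mathbf{\mu}$:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
d = 3072  # dimensionality of the normalized gradients u

# A synthetic population mean direction mu (unit vector).
mu = rng.normal(size=d)
mu /= np.linalg.norm(mu)

def projections(grads):
    """1-D marginal t = u^T mu for each normalized gradient u."""
    return np.array([g @ mu / np.linalg.norm(g) for g in grads])

# Two groups of 100 models' gradients; here both are drawn from the same
# synthetic distribution, standing in for the two experimental conditions.
group_a = [mu + 0.5 * rng.normal(size=d) for _ in range(100)]
group_b = [mu + 0.5 * rng.normal(size=d) for _ in range(100)]

dist = wasserstein_distance(projections(group_a), projections(group_b))
assert dist < 0.1  # near-identical marginals yield a small distance
```

Because the marginal lives on $[-1,1]$, a distance on the order of $10^{-2}$ (as reported in the rebuttal) indicates closely matching dispersions.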
--- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my questions and concerns. I still have a few remaining questions, but in response to the rebuttal, I have raised my score. > It should be noted that this is actually equivalent to uniformly sampling from the original distributions. Therefore, we expect an almost identical distribution compared with the models used in the manuscript, where seeds are determined at the beginning of all steps. I agree that each individual run has the same distribution, but the runs are now correlated. As an extreme, if I match both the initialization and batch-sampling random seeds, the runs will again also be identically distributed but will be fully correlated. My question was specifically whether the differences between the saliency maps of models at finite size were caused by the stochasticity from initialization, or from the randomness in the batches. For example, it seems a priori possible that for each initialization, the distribution has a *different mean* (depending on the initialization) and a *smaller variance,* but then when you randomize the initialization, these distributions are mixed together creating an overall distribution with a new mean and a *larger variance.* > [Other Sources of Stochasticity (W2, Q1)] I thank the authors for the additional experiments, and they are a valuable contribution to the overall story. It appears that the batch size makes little to no difference, but the learning rate and optimization algorithm make a fairly large difference (which is to be expected). Indeed, the first observation suggests that the randomness is almost entirely due to the random initialization. I think it would be extremely interesting to include a cosine similarity plot (width x width) where the first distribution uses a fixed initialization (seed=0) and a random ordering of the batches, and the second distribution uses another fixed initialization (seed=1) and a random ordering of the batches.
This could also be flipped so that you fix two orderings of the batches and then randomize over the initializations for each distribution. This would help disentangle these two effects. --- Reply to Comment 1.1.1: Title: Thank you for your positive feedback Comment: We appreciate the reviewer for the timely response to the rebuttal. Regarding the reviewer's remaining questions about the random initializations, we carry out additional experiments accordingly. We start by formalizing this problem for clarity: **Problem Setup:** Given a training scheme and model family $\mathcal{F}(k)$, the training procedure leads to a distribution of trained models $p(f)$. As we agreed, when the initialization is fixed to $\theta$, the training procedure is essentially sampling from the conditional distribution $p(f|\theta)$ instead of the unconditional distribution $p(f)$. **Original Approach:** In the original rebuttal, we studied the Wasserstein distance between the unconditional distribution $p(f)$ and the conditional distribution $p(f|\theta_0)$. The latter is obtained by fixing the initialization with `seed=0` and changing the seed from 1 to 100 in the training procedure. We found that the distribution of $t$ is very similar with or without the conditioning. **Comparison between Conditional Distributions:** As suggested by the reviewer, we focus on two conditional distributions $p(f|\theta_0)$ and $p(f|\theta_1)$, where $\theta_1$ represents the initializations under `seed=1`. Other settings are identical to $p(f|\theta_0)$. We thus have $f^0_1,\cdots,f^0_{100}\sim p(f|\theta_0)$ and $f^1_1,\cdots,f^1_{100}\sim p(f|\theta_1)$. The superscript indicates the initialization seeds and the subscript indicates the training seeds. - **Individual Similarity** First, we notice immediately that the training seeds for both $\theta_0$ and $\theta_1$ are 1~100. This means that $\forall i, f^0_i, f^1_i$ differ only in initializations.
We inspect (a) $\rho_{ind}(f^0_i, f^1_i)$ (100 pairs) to see if they have exceptional similarity compared with (b) $\rho_{ind}(f^0_i, f^1_j), i\ne j$ ($\binom{100}{2}=4950$ pairs). Besides, within the same condition, all models differ only in the order of the training batches. We thus also inspect the similarity of all models within the same condition: (c) $\rho_{ind}(f^0_i,f^0_j), i\ne j$ and (d) $\rho_{ind}(f^1_i,f^1_j), i\ne j$. Each of them has $\binom{100}{2}=4950$ pairs. As demonstrated in Tab. R1, (i) the comparison between (a) and (b) indicates that with different initializations, the same order of batches in the training procedure does not contribute to higher similarities. (ii) The comparison among (b), (c), and (d) indicates that the same initialization indeed leads to higher similarity even though the order of batches is distinct. This corresponds to the reviewer's intuition. It should be noted that the contributions of batch orders and initializations are also affected by the number of epochs. Intuitively, more training epochs should lead to smaller contributions from the initializations but greater contributions from the batch orders. We will add experiments in the manuscript to explore these factors.

Items|(a) Diff. Init., Same Order|(b) Diff. Init., Diff. Order|(c) Same Init. $\theta_0$, Diff. Order|(d) Same Init. $\theta_1$, Diff. Order
-|-|-|-|-
\# of pairs|100|4950|4950|4950
mean of $\rho_{ind}$|0.0758|0.0753|0.0879|0.0855
std. of $\rho_{ind}$|0.0038|0.0037|0.0042|0.0048

**Table R1:** Comparison of the similarities between single models under different criteria. [Continued in the next block]
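The four comparison groups above can be sketched with a minimal synthetic model. The construction is entirely our own assumption for illustration: the initialization is modeled as an additive shift direction shared by a condition, and the batch order as independent noise per training run, which is not a claim about the actual training dynamics:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
d, n = 3072, 30  # gradient dimension, models per condition

# Shared structure, plus an initialization-dependent shift,
# plus batch-order noise that is independent for every training run.
base = rng.normal(size=d)
shift0, shift1 = rng.normal(size=d), rng.normal(size=d)

def train(shift):
    g = base + shift + rng.normal(size=d)
    return g / np.linalg.norm(g)

f0 = [train(shift0) for _ in range(n)]  # fixed init theta_0, random batch orders
f1 = [train(shift1) for _ in range(n)]  # fixed init theta_1, random batch orders

paired = np.mean([u @ v for u, v in zip(f0, f1)])           # (a) diff. init, matched seed
cross = np.mean([u @ v for u in f0 for v in f1])            # (b) diff. init, diff. seed
within0 = np.mean([u @ v for u, v in combinations(f0, 2)])  # (c) same init theta_0

assert within0 > cross             # same init -> higher similarity
assert abs(paired - cross) < 0.05  # a matched training seed alone adds little
```

Under these assumptions the sketch reproduces the qualitative pattern of Tab. R1: sharing the initialization raises similarity, while merely matching the batch-order seed across different initializations does not.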
Summary: This paper investigates the distribution of trained deep neural networks through the lens of input saliency maps. It empirically shows that as the capacity of either of two stochastically optimized models increases, they tend to resemble each other more. Therefore, the authors hypothesize that within the same architecture (Resnet or CNN), different variants align in their input salience directions, and converge towards a shared population mean as their capacity increases. Furthermore, they propose a semi-parametric model, based on the Saw distribution, to capture this convergence trend. These findings enhance the understanding of various applications such as black-box attacks and deep ensembles. Strengths: 1. The paper is well-written. 2. The experiments are comprehensive and well-designed. 3. The input saliency maps perspective on the ensemble problem is novel. Weaknesses: 1. The primary finding is not entirely novel. Previous research has demonstrated that the performance of neural network ensembles converges to that of a single model as the model size increases [1,2]. Similarly, as the size of the models increases, the variance in predictions and disagreement among ensemble members diminishes [1]. 2. Several details about the experimental setup are omitted. These include the optimizer used, learning rate, learning rate schedule, batch size, number of iterations, weight decay, and data augmentation techniques. 3. The choice of optimizers and hyperparameters can significantly alter the distribution of trained models. However, the study lacks an analysis of their findings across different optimizers, hyperparameters and sources of stochasticity. 4. The experiments are conducted in small-scale settings, which may limit the generalizability of the results. [1] Geiger, Mario, et al. "Scaling description of generalization with number of parameters in deep learning." _Journal of Statistical Mechanics: Theory and Experiment_ 2020.2 (2020): 023401. 
[2] Kobayashi, Seijin, Johannes von Oswald, and Benjamin F. Grewe. "On the reversed bias-variance tradeoff in deep ensembles." ICML, 2021. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Have the authors conducted experiments to validate their findings across different optimizers and related hyperparameters? 2. Stochasticity can be introduced in the training of Neural Networks in various ways. Changing the initialization through different seeds is one possibility but other possibilities include for example training on different subsets of the data, large or small batch sizes, large or small learning rates, data augmentation or label noise. Have the authors validated their findings for different sources of stochasticity? 3. In Figure 8, why does the Large ResNet exhibit a higher test loss than the small CNN? 4. The explanation for the effectiveness of deep ensembles suggests that they better approximate the population mean. However, this assumes that the mean corresponds to a good generalizing model. Could the authors provide evidence or an explanation to support this assumption? 5. The introduction mentions "input salience" several times without providing a definition. For better clarity and understanding, it would be helpful to include a definition of "input salience" in the introduction. 6. In section 2.1, the authors state, "We focus on $f: \mathbb{R}^d \to \mathbb{R}$ which predicts the logit specifically for the targeted class." Could the authors clarify this statement? Does it mean that the analysis after training considers only the logit corresponding to the true class? 7. In section 2.2, $\mathbf{u}_1$ and $\mathbf{u}_2$ are sometimes used to represent different samples from the same set $\mathcal{G}_k$, and sometimes from different sets. It would be better to use a different notation to differentiate between the two cases. 8. In Hypothesis I, line 147, the term $\mu(k;x)$ is only introduced later in the text. 9. 
In line 234, could the authors clarify the definition of $\rho_{ind}(f,f;x)$ and explain its relationship with $\rho(k,k;x)$ ? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The limitations of the study are correctly addressed in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer YpLK We thank the reviewer for the invaluable feedback. We address the questions as follows. **[Novelty (W1)]** We summarize the work in [5,6] and how the novel contributions of our work on deep ensembles differ from them. It should also be noted that exploring the mechanism of deep ensembles is not our main contribution, but an application and verification of our work. - [5] studies the failure modes of deep ensembles. However, [5] focuses on empirical observations of the performance of ensembles, which is never claimed as our contribution. Note that the comparison between the **performance** of single models and ensembles is not novel and can be found in many works discussed in our manuscript (e.g. [7]). However, the reason behind deep ensembles' performance gain is still mysterious [5]. Our work hypothesizes and verifies that (1) single models' performance gain is related to the convergence trend, and (2) the convergence is towards the shared mean of **all capacities**. This provides a novel perspective to understand deep ensembles and why an ensemble of many *small* models can achieve the performance of a single *large* model. - [6] studies the double descent phase of DNNs using NTKs. The analysis based on NTK verifies similar results for deep ensembles as in [5]. As a result, the novelties of our work discussed above still hold. The similarity of mechanisms between models of different capacities, and the fact that the convergence is towards a shared population mean across capacities, are also novel compared with existing work. Additionally, NTK is a powerful tool, yet it comes with limitations and strong assumptions. The analysis is on the global loss regarding the data distribution (Eq. 4 in [6]). However, our results are on the direct prediction of any single input and its neighbors. We will add these works to the related work section for a more comprehensive review of existing work.
**[Experiment Setups (W2)]** To stay consistent across experiments, we follow the setups of [1] as claimed in L195-197. We use the SGD solver with a learning rate $\gamma = 0.1/\sqrt{1+\text{epoch}}$. No weight decay, momentum, or data augmentation is used, to minimize confounding factors. The batch size is 128. All models are trained for 200 epochs. **[Sources of Stochasticity (W3, Q1, Q2)]** Training schemes do not affect the discovery. Please refer to the global response for details, including comprehensive additional experiments. **[Scale of Experiments (W4)]** This work is mostly based on the benign overfitting and double descent phenomena (e.g. [1-4]), where large models counterintuitively do not exacerbate overfitting. We aim to provide insights into this mysterious capability of DNNs from the perspective of XAI. Hence the settings of our work are mostly based on existing studies on this topic, too. The complexity of the datasets used in our experiments is a strength, not a weakness. Unlike existing works, which mainly use CIFAR (e.g. [1]), we test our hypotheses on TinyImageNet, which includes 200 classes and 110k samples, offering a more complex and realistic data distribution. The same trends observed in the CIFAR datasets are also evident in TinyImageNet, suggesting these trends will hold for even more complex datasets like ILSVRC. We admit that carrying out the experiments on ILSVRC can be infeasible for studies of benign overfitting. As explained in Appendix B, with $k=64$, ResNetLarge is already equivalent to ResNet-18. We scale $k$ up to 448, which makes it impossible to train a group of models to estimate the empirical mean. For example, in [4], a study of overfitting carried out experiments on CIFAR and SVHN. **[Deep Ensembles and Single Models (Q4)]** As the capacity increases, the distribution becomes more concentrated (illustrated in Fig. 4(a), verified in Fig. 3, 5, 6, etc.). This suggests that *as a model approaches the population mean, the performance increases*.
To verify this, we carry out the experiments on deep ensembles since they approach the population mean of the model directly instead of by increasing capacity like single models. Fig. 8 shows that although approaching the mean differently, deep ensembles and single models have similar trends. This provides support for the connection between the population means and better generalizability. **[Clarifications]** - Q3 [ResNet]. ResNets have higher expressiveness compared with CNNs and rely heavily on data augmentations. Since model performance is beyond our scope, data augmentation is not included. Fig. R5 shows additional results of ResNets with data augmentations. It improves the performance of ResNets greatly. However, the convergence trend is not affected. NLL is also affected by confidence. ResNets tend to be more confident in wrong predictions, leading to higher losses. - Q5 [Input Salience]. Input salience is usually used by the XAI community to refer to the input gradient as a saliency map, and also used to refer to attribution-based methods. We will formally introduce the definition of the concept. - Q6 [Logit]. Logits refer to the prediction of the target class before softmax. We also include the results of the post-softmax output in Appendix A. It shows that this does not affect the results. Note that taking the gradient of the output probability of the target class is equivalent to the gradient regarding the negative log-likelihood, normalized by the probability. - Q7 [$u_1,u_2$]. We will use $u_1,u_2$ to refer to gradients of different sets and $u,v$ to refer to those of the same sets. - Q8 [$\mu(k;x)$]. We will move the definition of $\mu(k;x)$ to H1 for a better presentation. - Q9 [$\rho_{ind}$]. $\rho$ is defined in eq. (1). It refers to the similarity between models of capacity $k$ and is approximated by taking the average over 100 models each. $\rho_{ind}$ refers to the model-dependent similarity computed regarding the two models (in L233). 
We will define it more clearly to avoid ambiguity. --- Rebuttal Comment 1.1: Comment: Thank you for the provided clarifications and additional experiments. However, I still have the following concerns: 1. Based on your explanation, if I understand correctly, the hypothesis is that the population mean is consistent across different capacities for a given architecture and that both ensembling and increasing capacity are methods to approach this population mean (that is why ensembling works). Could you clarify how your experiments support the conclusion that both methods (ensembling and increasing capacity) converge to the same population mean? Is it only based on the fact that they share similar scaling for the test Loss? Have the authors tried to measure similarities between the ensemble output and single but larger models? 1. I still have some concerns regarding the experimental setup. For instance, in the newly provided experiments, specifically in Figure R5 (top right, there seems to be a mismatch between the plot title and the y-axis label) it appears that the maximum test accuracy on CIFAR-10 with ResNet, across all widths, is capped at 0.5, which is highly unusual. Similarly, in the experiments comparing Adam and SGD, it seems that Adam achieves better test error compared to SGD, which is generally unexpected see for example [1]. Could you provide further clarifications? [1] Wilson, Ashia C., et al. "The marginal value of adaptive gradient methods in machine learning." Advances in neural information processing systems 30 (2017). --- Reply to Comment 1.1.1: Title: Thank you for your valuable comments Comment: We appreciate the reviewer very much for your comments. This allows us to resolve the reviewer's remaining concerns further. 
**[Single Models & Ensembles]** > The hypothesis is that the population mean is consistent across different capacities for a given architecture and that both ensembling and increasing capacity are methods to approach this population mean Yes, this understanding regarding the hypothesis about ensembles is correct. > Could you clarify how your experiments support the conclusion that both methods (ensembling and increasing capacity) converge to the same population mean? - **Ensembles**: Note that for homogeneous ensembles, all ensemble members $f^{(1)},\cdots,f^{(M)}\in\mathcal{F}(k)$ are from the same family (i.e. same capacity $k$). Therefore, their mean is by definition an approximation to the population mean of the entire family. Since the error of such an approximation decreases as the number $M$ of ensemble members grows, we train $M=100$ ensemble members to minimize the error as much as possible. Therefore, the following statement > the population mean is consistent across different capacities is verified by comparing the approximated population mean of different families $\mathcal{F}(k_1), \mathcal{F}(k_2)$. The results are shown in Fig. 5 of the manuscript. It can be observed that the cosine similarity even reaches 0.9, which is significant given the curse of dimensionality. - **Single Models**: Unlike ensembles, single models $f\in\mathcal{F}(k)$ approach the population mean as $k$ increases. This is demonstrated in our experiments. For example, Fig. 3 shows the expected similarity of two randomly picked models of capacity $k_1,k_2$. Larger $k$s lead to higher expected similarity between single models. This suggests a decreasing dispersion as $k$ increases. Therefore, the following statement is verified. > both ensembling and increasing capacity are methods to approach this population mean For ensembles, we have explained above that they approach the population mean by definition.
As for single models, they approach the population mean as $k$ increases due to the decreasing dispersion. - **Performance**: Now that we have established that *both ensembles and single models can approach the population mean by increasing $M$ and $k$ respectively*, we carry out experiments in Fig. 8 to demonstrate **whether the distance to the population mean is related to the performance**. Here we also purposely align the x,y-axes of the two subfigures. It is observed that even though ensembles and single models approach the population mean differently, the testing performance (y-axis) is related to the dispersion degree (x-axis) in the same pattern. **[Clarification on Fig. R5]** We sincerely appreciate the reviewer for pointing this out so that we can clarify the typos of this figure. First, in the right subfigure of Fig. R5, the title should be "Loss" for the left column and "Accuracy" for the right column. As for the accuracy, we clarify that the top row should be the results of **CIFAR-100**, and the bottom row should be the results of **CIFAR-10**. Therefore, the ~50% accuracy is achieved by ResNet trained on CIFAR-100 without data augmentations. As for CIFAR-10 (bottom row), a ResNet trained from scratch with data augmentation achieves an accuracy of ~90%, which is a standard result. We apologize for the typo and the confusion. [Continued in the next block]
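The claim that a homogeneous ensemble's average approaches the family's population mean as $M$ grows can be illustrated with a toy sketch. The gradients here are synthetic unit vectors scattered around a known mean direction $\mu$; the noise level is an assumption chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3072

# A known population mean direction mu (unit vector).
mu = rng.normal(size=d)
mu /= np.linalg.norm(mu)

def member():
    """One ensemble member's normalized gradient, scattered around mu."""
    g = mu + 0.5 * rng.normal(size=d)
    return g / np.linalg.norm(g)

def cos_to_mu(M):
    """Cosine similarity between the average of M members and mu."""
    avg = np.mean([member() for _ in range(M)], axis=0)
    return float(avg @ mu / np.linalg.norm(avg))

# More members -> the ensemble average aligns better with the population mean.
assert cos_to_mu(100) > cos_to_mu(5)
```

This mirrors the argument in the reply: averaging many members cancels the member-specific noise around $\mu$, whereas a single large-capacity model instead approaches $\mu$ because its family's dispersion is smaller.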
Rebuttal 1: Rebuttal: ## Global Response to all Reviewers We appreciate the reviewers very much for the invaluable feedback and insightful suggestions! First, we address the questions shared by reviewers as follows. All images are shown in the attached PDF file, indexed by the prefix `R` (e.g. Fig. R1). ## Different Sources of Stochasticity This work aims to reveal the convergence trend of the distribution of model behaviors under the stochasticity of the training criterion. This does not limit the conclusion to the specific criterion used in the manuscript. Distinct training criteria can lead to different distributions of trained models. But these different distributions of trained models **all** satisfy the revealed trend. **Training Details in the Manuscript** For the sake of consistency, in this work, we follow the training criterion suggested in [1]. Specifically, stochastic gradient descent (SGD) is used as the solver, with a batch size of 128. The input data are normalized, but not augmented. We start with the initial learning rate $\gamma_0=0.1$ and update it with $\gamma_t=\gamma_0/\sqrt{1+t}$, where $t$ is the epoch. **Additional Results** To resolve the reviewers' concerns, we present additional experiments to investigate (1) depths and widths; (2) learning rates; (3) batch sizes; (4) solvers. The results are shown in the attached PDF file. The main results are the similarities *within* each criterion, but we also present cross-criterion similarity results (e.g. between models trained using different learning rates/solvers/batch sizes/etc.). We elaborate on them as follows. - **Depths and widths**: Scaling depth is not as straightforward as scaling width, since modifying the depth may change the widths as well. Therefore, in the manuscript we study the influence of depth by setting -small and -large variations (L809).
Here we present additional results that study the influence of depths continuously, with 1~5 layers, each of which is followed by a maxpool layer with stride 2. Finally, an adaptive pooling layer is appended at the end. To rule out the influence of widths (channels), all layers have the same width, determined by $k$. E.g., for the 4-layer scenario, the intermediate layers have widths [k, k, k, k] instead of [k,2k,4k,8k] in the manuscript. The results are shown in Fig. R1. It can be found that (1) given a fixed depth or width, the influence of the other factor is similar when scaled up; (2) depths behave slightly differently from widths: larger widths lead to higher similarities, while closer structures in depths have higher similarities. For widths (left), the similarity always increases left-to-right and top-to-bottom. But for depths (right), pairs near the diagonal have higher similarities. - **Batch Sizes**: We investigate the influence of batch sizes, varying in {64, 128, 256, 512}. The results are shown in Fig. R2. It can be observed that although different batch sizes lead to different performance (e.g. testing accuracy), the convergence trend holds in all scenarios. - **Learning Rates**: We test how different learning rates affect the results. We include {1e-1, 1e-2, default}, where "default" refers to the criterion used in the manuscript. As shown in Fig. R3, the revealed trend is preserved for all learning rates. It is also worth noting that learning rates affect ResNets more than CNNs. - **Solvers**: Apart from SGD, we include Adam, AdamW, and SGD w/ momentum. For Adam and AdamW we set the learning rate to 1e-3, while SGD w/ momentum uses a learning rate of 1e-1 with a momentum of 0.9. The results are shown in Fig. R4. Although different solvers lead to models of different performances, they all preserve the same convergence trend. - Note that as studied in Fig. 6 of the manuscript, $\rho_{ind}$ can be a computationally efficient surrogate for $\rho$.
Therefore, we studied $\rho_{ind}$ in these additional experiments. For similarity under the same criterion, we plot only the upper-triangular part of the similarity maps. In conclusion, although training schemes can affect the resulting distributions of models, the influence of the model capacity stays invariant across different criteria. We will include the discussion of these variants of training criteria in the manuscript.
### **References**
**(The references of individual responses are also listed here)**
[1] Nakkiran, P., Kaplun, G., Bansal, Y., Yang, T., Barak, B., and Sutskever, I. (2021). Deep double descent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment, 2021(12):124003.
[2] Cao, Y., Chen, Z., Belkin, M., & Gu, Q. (2022). Benign overfitting in two-layer convolutional neural networks. Advances in Neural Information Processing Systems, 35, 25237-25250.
[3] Li, Z., Zhou, Z. H., & Gretton, A. (2021). Towards an understanding of benign overfitting in neural networks. arXiv preprint arXiv:2106.03212.
[4] Mallinar, N., Simon, J. B., Abedsoltan, A., Pandit, P., Belkin, M., & Nakkiran, P. (2022). Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting. arXiv e-prints, arXiv-2207.
[5] Geiger, M., Jacot, A., Spigler, S., Gabriel, F., Sagun, L., d’Ascoli, S., ... & Wyart, M. (2020). Scaling description of generalization with number of parameters in deep learning. Journal of Statistical Mechanics: Theory and Experiment, 2020(2), 023401.
[6] Kobayashi, S., von Oswald, J., & Grewe, B. F. (2021, July). On the reversed bias-variance tradeoff in deep ensembles. ICML. Pdf: /pdf/dfcaa8a3b6af1eb40b8e8e4f2dc91df55991a957.pdf
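As a rough illustration of why the constant-width configuration in this response isolates the effect of depth, one can compare parameter counts of the two width schemes. This is a hypothetical sketch (3×3 kernels, 3 input channels, bias terms assumed), not the authors' code:

```python
def conv_param_count(widths, in_channels=3, kernel=3):
    """Total parameters (weights + biases) of a stack of square conv layers."""
    total, prev = 0, in_channels
    for out in widths:
        total += prev * out * kernel * kernel + out  # weight tensor + bias vector
        prev = out
    return total

k = 64
constant = conv_param_count([k, k, k, k])            # [k, k, k, k] variant
doubling = conv_param_count([k, 2 * k, 4 * k, 8 * k])  # [k, 2k, 4k, 8k] variant
```

Under these assumptions the doubling-width stack has far more parameters than the constant-width one, so holding widths equal to $k$ keeps capacity comparisons focused on depth alone.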
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Learning Social Welfare Functions
Accept (spotlight)
Summary: The authors study the learnability of social welfare functions given decision data from a central decision-maker who takes their constituents' welfare into account. They discuss PAC bounds for a number of settings, with a focus on weighted power mean functions. These settings include cardinal utility vectors under a target social welfare function and pairwise comparisons between utility vectors. Learning is considered either without noise, with i.i.d. noise, or with logistic noise. The authors validate their theoretical findings by learning welfare functions on proprietary data from Lee et al. (2018). Strengths: Interesting novel concept and quality results. This was a pleasure to read and will be a useful contribution to the research community. Weaknesses: I'd be interested if you could discuss some implications or interpretations of your work. You obtain PAC bounds for the various settings and you summarize your results in Table 1. I am not immediately sure what the quality of these results is, how they compare to prior work, and how they would be represented in real-world learning. Minor: - Perhaps include a short primer on VC dimension, pseudo-dimension, Rademacher complexity, and PAC learning for unfamiliar audiences in the appendix - Perhaps cite (Xia, AAMAS 2013) and related papers on preference/rank learning (e.g., (Zhao, Liu, and Xia, IJCAI 2022) or (Newman, Royal Society 2022) or (Conitzer and Sandholm, UAI 2005) or (Xia, Conitzer, and Lang, AAMAS 2010)) as related work - Line 142: "welfare" not "malfare" Technical Quality: 4 Clarity: 4 Questions for Authors: NA Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > I'd be interested if you could discuss some implications or interpretations of your work. You obtain PAC bounds for the various settings and you summarize your results in Table 1. I am not immediately sure what the quality of these results is, how they compare to prior work, and how they would be represented in real-world learning. Our PAC bounds provide theoretical guarantees on the sample complexity of learning social welfare functions, demonstrating that accurate learning is possible with a reasonable number of samples. These bounds are the first of their kind, as prior work [1] focused on more restricted classes and did not provide finite-sample guarantees. There is a plethora of other work on preference learning and ranking, including the papers the reviewer suggested, but to the best of our knowledge these don't focus on learning social welfare functions. We showcase the practical feasibility of our approach on a real-world dataset of food allocation decisions [2], highlighting its potential to uncover valuable insights about decision-making priorities. Learning social welfare functions from data can significantly impact society by enabling us to understand a decision maker's priorities and notions of fairness, ultimately helping to identify biases and inform the design of improved policies. However, we recognize the crucial ethical considerations surrounding the deployment of such learning systems, particularly the risks associated with model misspecification. Our focus on the axiomatically-justified family of weighted power means mitigates these risks, but careful validation remains essential. We will incorporate a more detailed discussion of these implications and expand on the connections to related work on preference learning and decision theory in the revised paper. > Perhaps include a short primer on VC dimension, pseudo-dimension, Rademacher complexity, and PAC learning for unfamiliar audiences in the appendix.
We will add a primer on key learning-theoretic concepts to the appendix, making the paper more accessible to a broader audience. > Perhaps cite (Xia, AAMAS 2013) and related papers on preference/rank learning (e.g., (Zhao, Liu, and Xia, IJCAI 2022) or (Newman, Royal Society 2022) or (Conitzer and Sandholm, UAI 2005) or (Xia, Conitzer, and Lang, AAMAS 2010)) as related work Thank you for suggesting these papers, which we're familiar with; we're happy to discuss them. Here's a quick comparison: [3] provides a high-level overview of methods for designing novel social choice mechanisms. It identifies challenges and develops a three-stage workflow to combine machine learning methods and social choice axioms. While their paper provides prescriptions for learning social choice functions, they do not study the complexity of learning these functions from data. Our paper focuses on a particular class of social *welfare* functions and demonstrates polynomial sample complexity and efficient learnability in practice. [4] studies random utility models (RUMs) with 3 modifications, with a special focus on Plackett-Luce (PL) models. It presents a sufficient condition and an EM algorithm for MLE. This work concerns preference learning and models individual utilities for agents. By contrast, our work models how these individual utilities are combined to generate social welfare, and this problem has a different axiomatic basis which gives rise to a different function family. We note that preference learning and social welfare learning can be a part of the same pipeline, with the former modeling utilities and the latter using these utilities to model social welfare. [5] addresses the problem of ranking entities through pairwise comparisons, where comparisons can contradict each other. It gives an EM algorithm using a BTL model modified to capture different types of comparisons. 
However, this work develops a ranking for the current set of alternatives, whereas our goal is to learn the social welfare function, which can be used for completely new comparisons in the ordinal case. [6] explores which common voting rules have a noise model which makes the rule an MLE, establishing results for various popular voting rules. [7] extends this work further by considering aggregation of multi-issue voting, using CP-nets to represent how issues depend on each other. Both papers mention two perspectives for social choice: 1) agents' diverse preferences are the basis for joint decisions, 2) there are correct joint decisions with agents having different perceptions, and thus each agent has a noisy version of this correct joint decision. While both papers adopt the second perspective, our work is aligned with the first. [1] A. D. Procaccia, A. Zohar, Y. Peleg, and J. S. Rosenschein. The learnability of voting rules. Artificial Intelligence, 2009. [2] Min Kyung Lee, et al. WeBuildAI: Participatory framework for algorithmic governance. CSCW 2019. [3] Xia, Lirong. Designing social choice mechanisms using machine learning. AAMAS 2013. [4] Zhao, Z., Liu, A., & Xia, L. Learning mixtures of random utility models with features from incomplete preferences. IJCAI 2022. [5] Newman, M. E. Ranking with multiple types of pairwise comparisons. Proceedings of the Royal Society A, 2022. [6] Conitzer, V., & Sandholm, T. Common voting rules as maximum likelihood estimators. UAI 2005. [7] Xia, Lirong, Vincent Conitzer, and Jérôme Lang. Aggregating preferences in multi-issue domains by using maximum likelihood estimators. AAMAS 2010. --- Rebuttal Comment 1.1: Title: Thank you for your response. Comment: Thank you for your response.
Summary: This paper studies the learnability of social welfare functions -- which are functions over the utility of a group of voters and an outcome. Under varying information schemes, they address the question of how well it is possible to learn the social welfare function being used by a decision maker. The first setting considers learning when cardinal values of actions are knowable, which the authors point out corresponds to regression. The second setting looks at when the information given is a pair of utility vectors and some indication of which vector corresponds to higher social welfare. Finally, the authors consider the pairwise model when information is noisy. The paper shows that in all settings being considered, a large class of social welfare functions are learnable with a polynomial number of samples. Experiments demonstrate the existence of a practical algorithm for these results. Strengths: The problem being studied in this paper is well-defined and seems like it may be interesting. Despite the questions asked not being obscure, I am aware of little work that studies similar ideas ([17] being the exception, and I've always thought it odd that more work has not directly built atop [17]). The methodology taken in the paper seems quite reasonable. While the proofs are not included in the body the results appear correct, to my limited understanding. This problem is certainly able to inspire potential future research and provides a reasonable contribution in its own right. Weaknesses: While only incidentally a weakness of this particular paper, the state of science would be better if this were three separate papers (or a journal submission). There is simply not enough time to consider the paper in depth and the parts of the paper important for peer review (i.e.,
the proofs and many validation experiments) are not in the part of the paper that gets the bulk of reviewing attention, so my understanding of the paper is quite limited; I found the math quite dense and open to improved clarity. As far as clarity is concerned, the paper is moderately readable but I feel that the results could be explained somewhat more clearly without requiring additional space. The introduction does an adequate job of outlining the common idea of social welfare and what problem the paper studies. I understand what problem the paper solves, but not until the end of Section 7 is there some suggestion of why this problem might be interesting. Motivating the questions in the paper earlier would be useful. Technical Quality: 3 Clarity: 2 Questions for Authors: N/A Confidence: 1 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Some discussion has been included but it is somewhat limited and surface level. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: At the risk of slightly abusing the rebuttal, we note that overall you seem to have a positive view of the paper. You write that the problem "seems like it may be interesting" and it "provides a reasonable contribution in its own right," the methodology "seems quite reasonable," the results "appear correct," and the work "is certainly able to inspire potential future research." The weaknesses you list don't appear to be criticisms of the paper; the first seems to reflect a shortcoming of the NeurIPS reviewing timeline, and the second is that the paper is "moderately readable" with a suggestion (which we would be happy to follow) to move some of the discussion to the introduction. In light of this, we would be very grateful if you would consider updating your rating of the paper. Alternatively, we would kindly ask that you clarify the weaknesses that lead you to recommend rejection. --- Rebuttal Comment 1.1: Comment: TL;DR: The math/theory is not explained clearly enough for someone not already expert enough to write this paper themselves. Our difference of opinion is largely philosophical, and partially from my preference to write a complete review (rather than leaving sections blank). When I write, for example, that the methodology "seems quite reasonable" it is only partially a strength. My other meaning is that something has gotten lost between your keyboard (writing the paper) and my keyboard (writing the review) that makes me lack the confidence to write a stronger sentence. I suspect that our philosophical disagreement may lie in whether it is the author's job to write a paper as clearly as possible or the reader's job to be smart/hardworking enough to understand it. In writing my review I went back to Ariel's 2009 paper to try to answer the question "is it possible to write something with a similar amount of depth in a clear and understandable manner?"
I found that paper much more readable, which suggests to me that this paper could (and, therefore, should) be more readable as well. If I were to run into this paper as a reviewer again, I would give a higher score if concepts and results were explained in a clearer manner but, for now, I maintain my current score. To be clear for meta-reviewers deciding what factors to prioritize: my score is largely based on the clarity of the paper, which has prevented me from reviewing the paper in the depth needed for a higher score, not on any known technical issues. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We would like to solicit any specific changes/additions you can suggest to improve the clarity of the paper, and would be happy to incorporate these. Such feedback would be very valuable to us, since we strongly believe in the interdisciplinary value of our work, and are committed to making it accessible to a broader audience. Based on the other reviewers' suggestions, we believe the following changes would make the paper easier to read: - Additional clarifications for all theorems and lemmas: While it would be difficult to incorporate proofs within the NeurIPS page limit, we can definitely add more intuition for our results. We point to our rebuttal to reviewer 5Qbq as an example, where we give intuitions behind Theorem 3.2. We will rewrite our current exposition to include these additional points. - Improved motivation for experiments: Our rebuttal to reviewer 7Nn5 contains further motivation for our experiments, and we plan to add it to the paper to better contextualize our experiment plan. - As reviewer 1QjM has suggested, we will add a primer on key learning-theoretic concepts like VC dimension, Rademacher complexity, and pseudo-dimension. Finally, while we appreciate the comparison with Procaccia et al.
(2009), we note that it’s a journal paper that enjoys unlimited space, so it may set an impossibly high bar for a 9-page conference paper.
Summary: This work studies learning the social welfare function from a power mean function class. - They first consider the cardinal social welfare setting, where the data distribution is over the utilities and social welfare values. They provide upper bounds on the pseudo-dimensions of the function class and then apply them to bound the generalization loss. - They then consider the pairwise preference setting, where the data distribution is over utilities of a pair of actions and the comparison of their social welfare values. They provide bounds on the VC dimension of the function class and then apply them to bound the generalization loss. They further study two noisy settings, iid noise and logistic noise. - Finally, they conduct experiments to support their results. Strengths: This work studies a new problem of learning social welfare functions. The writing is very clear. They provide both theoretical and empirical analyses. Weaknesses: The availability of labels in the real world: In the cardinal setting, the label of the data point is the true social welfare value. I am curious if there really exists any such labeled data set. I have the same question in the pairwise comparison setting. The technical contribution seems to be limited. It looks like the results are derived by applying standard learning theory results. If there are any technical challenges in deriving the results, e.g., Lemmas 3.1 and 4.1, it would be great if the authors could address them. This is also not a new approach but standard ERM. So far, the problem studied in this work looks like a special case of general learning problems and doesn't require any new techniques. The experimental results for this problem also look consistent with the phenomena in general learning, e.g., loss decreases with decreasing noise and increasing sample size. What is the takeaway information from the experiments? Technical Quality: 3 Clarity: 3 Questions for Authors: see weaknesses.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The availability of labels in the real world: In the cardinal setting, the label of the data point is the true social welfare value. I am curious if there really exists any such labeled data set. I have the same question in the pairwise comparison setting. Let us start with pairwise comparisons. In our paper, we use data from the work of Lee et al. on food allocation, from which we derive the learned utility functions of participants/stakeholders. Their data also includes food allocation decisions made by a human decision maker (specifically, a "dispatcher"). These allocation decisions provide ordinal information and induce pairwise comparisons: for each of thousands of decisions in the data, the chosen alternative is preferred to each alternative that was not chosen. Consequently, we have the data to learn the $p$-mean welfare function optimized by the human dispatcher in practice. We did not run this experiment as it requires quite a bit of data processing and does not seem to give generalizable insights. Nevertheless, this shows that pairwise comparison data is, in fact, available, and demonstrates why we'd expect to obtain such data more generally. The cardinal setting is important to study to understand the complexity of the problem, before analyzing the harder pairwise setting. Obtaining cardinal labels is more challenging but not impossible. We're actively working on this through collaborations with domain experts in public health and disaster management. We're designing experimental platforms to elicit cardinal labels from real decision-makers in these domains. In other words, we're working to bridge the gap between theory and practice through ongoing collaborations and experimental design efforts. While it's difficult to convey the full scope of our work within the constraints of anonymous review, we're genuinely committed to making this approach practically viable. > The technical contribution seems to be limited.
It looks like the results are derived by applying standard learning theory results. If there are any technical challenges in deriving the results, e.g., Lemmas 3.1 and 4.1, it would be great if the authors could address them. This is also not a new approach but standard ERM. So far, the problem studied in this work looks like a special case of general learning problems and doesn't require any new techniques. We disagree with this comment. While the VC- and Rademacher-based bounds are standard, deriving these quantities for the specific class of social welfare functions is technically challenging, as we outline below. Similarly, while ERM is standard, we demonstrate how to leverage properties such as quasi-convexity of the class of social welfare functions to implement the computationally challenging ERM in practice and enable learning in real-world settings. We highlight the novel technical challenges and contributions in our work below. Lemma 4.1 presented significant technical challenges. For the known weights case (4.1a), a key difficulty was bounding the number of roots of the difference of two weighted power mean functions. This required applying a little-known result by Jameson on counting zeros of generalized polynomials, leading to a tight VC dimension bound of $O(\log d)$, which scales non-trivially with $d$. For unknown weights (4.1b), we developed a novel proof technique involving analysis of linear dependence among vectors defined by utility differences. This required intricate combinatorial arguments to establish an $O(d \log d)$ upper bound, which isn't derivable from standard learning theory results. For the known weights case, we also provided a lower bound (4.1c) using a Gray code construction inspired by Bhat and Savage, demonstrating that our upper bound in this case is tight. Beyond our lemmas, we proved quasi-convexity of our functions with respect to weights for fixed $p$.
This property holds for individual samples but may not extend to empirical loss over multiple samples, since the empirical losses we consider involve a mean over multiple samples. Nevertheless, this observation allowed us to design a practical, performant algorithm, demonstrating real-world applicability while highlighting our problem's complexity. > The experimental results for this problem also look consistent with the phenomena in general learning, e.g., loss decreases with decreasing noise and increasing sample size. What is the takeaway information from the experiments? Our experiments offer insights beyond typical learning phenomena, specifically for learning social welfare functions. They validate our hypothesis that gradient descent with fixed $p$ is effective, despite the absence of guaranteed quasi-convexity. Using semi-synthetic data based on real-world utility vectors, we demonstrate our approach's practicality. Despite the non-convexity in $(\mathbf{w}, p)$, our algorithm consistently converges to parameters close to $(\mathbf{w}^*, p^*)$, which is not captured by our theoretical results. This convergence is crucial, as it indicates that the learned $(\hat{\mathbf{w}}, \hat{p})$ accurately captures the true weights and fairness notion with sufficient samples. Key takeaways include: 1. Empirical validation of the scaling laws established in our theoretical bounds 2. Demonstration of effective learning with realistic sample sizes using a computationally feasible algorithm 3. Evidence that learned parameters closely reflect true individual weights and fairness notions (e.g., KL divergence between true and learned weights decreases to <0.1 with sufficient samples) 4. Quantification of sample complexity increase with noise under various noise models. These results not only support our theoretical contributions but also highlight the practical applicability of our approach in learning and interpreting decision-makers' implicit social welfare functions.
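For concreteness, the weighted power mean family $M_p(\mathbf{u}; \mathbf{w}) = (\sum_i w_i u_i^p)^{1/p}$ discussed in this thread can be sketched in a few lines. This is a minimal illustration of the standard definition (including the $p = 0$ geometric-mean limit), not the authors' implementation:

```python
import math

def weighted_power_mean(u, w, p):
    """M_p(u; w) = (sum_i w_i * u_i**p) ** (1/p); weighted geometric mean at p = 0."""
    assert abs(sum(w) - 1.0) < 1e-9, "weights are assumed to sum to 1"
    if p == 0:  # limiting case as p -> 0
        return math.exp(sum(wi * math.log(ui) for wi, ui in zip(w, u)))
    return sum(wi * ui ** p for wi, ui in zip(w, u)) ** (1.0 / p)
```

Varying $p$ interpolates between fairness notions: $p = 1$ gives the utilitarian (weighted average) welfare, $p = 0$ the Nash-style geometric mean, and $p \to -\infty$ approaches egalitarian (min) welfare, which is why learning $(\mathbf{w}, p)$ recovers both weights and a fairness notion.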
--- Rebuttal Comment 1.1: Comment: Thanks for the response! My concerns are addressed and I'm happy to increase my rating. I suggest the authors include the discussion of the first two questions in the updated version. --- Reply to Comment 1.1.1: Comment: Thank you very much, we'd be glad to follow your suggestion.
Summary: The paper is about learning a social welfare function belonging to the well-studied family of weighted power mean functions, as a way to understand a policy maker's rationale. In particular, the paper focuses on two settings: 1) when the input is the vector of utilities/social welfare, and 2) when the input is a pairwise comparison. The paper derives theoretical bounds for different kinds of social welfare information and different loss functions, with both known and unknown weights. Strengths: The paper focuses on an interesting and important question on social welfare function learning, which has great potential in social learning and policy making. Overall, it is well-organized and well-presented. The theoretical results are solid and elegant. Overall, the authors are careful and transparent about evaluating both the strengths and weaknesses of their work. The claims/arguments are explored in sufficient depth. Weaknesses: I find some of the results hard to interpret, for example, the bounds in Theorem 3.2. It would be great if the authors could add intuitions behind the result and better demonstrate the influence of each term. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How should I understand the "pseudo-dimensions" of $M_{w, d}$ and $M_d$ in line 161? 2. For the pairwise comparison setting, does it require access to the pairwise comparisons between all pairs of actions? Can it be extended to a setting where there's only partial comparison available, if so, how does it change the theoretical bound? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > I find some of the results hard to interpret, for example, the bounds in Theorem 3.2. It would be great if the authors could add intuitions behind the result and better demonstrate the influence of each term. We appreciate the reviewer's comments and will add further intuition about the role of each term in our results to make the bounds more interpretable. Our bounds provide a characterization of the sample complexity of learning social welfare functions. Theorem 3.2, for example, provides bounds for learning from cardinal utility data. The first term in both bounds of Theorem 3.2 depends on $\xi = u_{\max}(u_{\max} - u_{\min})$. This quadratic dependence on the range of utility values is expected, since the $\ell_2$ loss also scales quadratically with the magnitude of the utilities (equivalently with $\xi$). When the weights $\mathbf{w}$ are known, there is no dependence on $d$ in the first term. However, when $\mathbf{w}$ is unknown, the first term grows as $d\log d$. This means that to keep the first term roughly constant, the number of samples $n$ needs to increase proportionally to $d\log d$ as $d$ grows. This is intuitive since learning the $d$ parameters in the unknown weights case requires a proportional increase in the number of samples. The second term in both bounds is an artifact of applying the Rademacher bound. > How should I understand the "pseudo-dimensions" of $M_{w,d}$ and $M_{d}$ in line 161? The proof of Lemma 3.1 in the appendix provides the formal definitions related to pseudo-dimensions (Definition A.4 will be added for completeness). Intuitively, the pseudo-dimension plays a role similar to the VC-dimension for binary function classes, and these complexity measures directly translate into sample complexity bounds for PAC learning. We will add a short clarifying remark to this effect in Section 3. > For the pairwise comparison setting, does it require access to the pairwise comparisons between all pairs of actions?
Can it be extended to a setting where there's only partial comparison available, if so, how does it change the theoretical bound? Our results hold even if we only observe comparisons for a polynomial in $d$ number of pairs for known and unknown weights respectively, as long as these pairs are drawn i.i.d. from some underlying distribution. In other words, we do not require access to pairwise comparisons between all pairs. [1] Anthony M, Bartlett PL. The Pseudo-Dimension and Fat-Shattering Dimension. In: Neural Network Learning: Theoretical Foundations. Cambridge University Press; 1999:151-164. --- Rebuttal Comment 1.1: Comment: I've carefully read the rebuttal by the authors, my evaluation of the paper remains the same.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Uncovering the Redundancy in Graph Self-supervised Learning Models
Accept (poster)
Summary: This paper presents new insights on graph self-supervised learning models. Namely, the parameters, as well as the learned representations, of graph self-supervised learning models are highly redundant. The paper also proposes a novel pre-training and fine-tuning paradigm, SLIDE, which achieves better performance with a smaller number of parameters for fine-tuning. The experimental results are also remarkable, e.g., the improvements can also be achieved even with a random reduction of parameters. Strengths: - Deepening our understanding of graph self-supervised learning models is an important endeavor, given the popularity of these models. This paper is aligned with this line of research and makes some important discoveries on redundancy at both the neuron and layer levels. - The paper is overall well-organized and clearly written. I enjoy reading it. - The experiments are clearly designed and well-executed, further demonstrating the value of this work. Overall, this paper is a valuable contribution to the field and deserves a wide audience for its discoveries and insights. Weaknesses: I think it would be valuable to explain more on the empirical results. I in particular wonder why “Although SLIDE significantly reduces the number of parameters in self-supervised GNNs, SLIDE still achieves better performance than FT". Is there any (mathematical) definition on the model redundancy? It is better to verify that the proposed model does reduce the model redundancy. Technical Quality: 3 Clarity: 3 Questions for Authors: see weakness Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for all the comments, and we are honored that you enjoyed our paper. We have addressed all your questions below and hope this clarifies any confusion about our work. > [W\#1] I in particular wonder why “Although SLIDE significantly reduces the number of parameters in self-supervised GNNs, SLIDE still achieves better performance than FT". **Response:** Indeed, reducing the number of parameters can lead to a slight performance decrease in self-supervised GNNs, specifically when applying FT to Slim GNNs compared to Original GNNs. However, as highlighted in Section 2, we identify strong correlations among different dimensions of embeddings, emphasizing the necessity of introducing de-correlation techniques to enhance the informativeness of graph self-supervised learning models. Therefore, the improved performance of SLIDE despite having fewer parameters is attributed to the incorporation of de-correlation methods. This underscores the effectiveness of our approach in mitigating model redundancy and enhancing model performance. Moreover, in Section 4.2 of our paper we conduct ablation experiments on the de-correlation method, validating our findings in Section 2 and demonstrating its critical importance within SLIDE. > [W\#2] Is there any (mathematical) definition on the model redundancy? It is better to verify that the proposed model does reduce the model redundancy. **Response:** We appreciate and thank you for your insightful question. Model redundancy currently lacks a mathematical definition, but it can be roughly measured using some alternative indicators. Typically, we evaluate model redundancy by assessing the performance of the models and the correlations within the representations. In our paper, we highlight the beneficial performance of the SLIDE models, which suggests a reduction in model redundancy.
As for correlations, we conduct layer-level and neuron-level correlation analyses on embeddings from both the Original GNNs and the SLIDE models, and these results are presented in the tables below. To simplify the evaluation process, we analyze representations of randomly sampled nodes, using MaskGAE as an example. At the layer level, we assess the correlations between representations of adjacent layers. At the neuron level, we calculate correlations between representations formed by different subsets of neurons within a single layer. All these results are computed using CKA scores. Furthermore, as mentioned in the Conclusion, we recognize the importance of supplementing theoretical analyses for model redundancy, which is a focus of our future work. We aim to provide a detailed mathematical definition of model redundancy in our future endeavors. Once again, thank you for your insightful questions and your appreciation of our work!

**Layer Level:**

| Module | Metric | Cora | CiteSeer | PubMed |
| :-----------: | :-----------: | :----: | :------: | :----: |
| | data-layer1 | 0.3939 | 0.3131 | 0.5141 |
| Original GNNs | layer1-layer2 | 0.9471 | 0.9509 | 0.9405 |
| | average | 0.6705 | 0.6320 | 0.7273 |
| | data-layer1 | 0.3834 | 0.2680 | 0.4881 |
| SLIDE | layer1-layer2 | 0.8622 | 0.8413 | 0.8945 |
| | average | 0.6228 | 0.5546 | 0.6963 |

**Neuron Level:**

| Module | Metric | Cora | CiteSeer | PubMed |
| :-----------: | :-----: | :----: | :------: | :----: |
| | layer1 | 0.6957 | 0.7567 | 0.7596 |
| Original GNNs | layer2 | 0.7566 | 0.7772 | 0.7652 |
| | average | 0.7262 | 0.7670 | 0.7624 |
| | layer1 | 0.6768 | 0.7462 | 0.7394 |
| SLIDE | layer2 | 0.5641 | 0.5637 | 0.7056 |
| | average | 0.6204 | 0.6550 | 0.7225 |
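For reference, the CKA scores reported in the tables above can be computed with linear CKA (Kornblith et al., 2019); the sketch below is a minimal illustration under that assumption, not the authors' code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_samples, dim); higher means more correlated."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)
```

A score near 1 (e.g., layer1-layer2 for the Original GNNs above) indicates highly similar, hence redundant, representations between the two layers.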
Summary: This paper studies the redundancy in graph self-supervised learning models. The authors discover that even after randomly removing a number of parameters, the performance of graph self-supervised learning models remains comparable, revealing the redundancy problem. The authors then propose to simultaneously fine-tune the graph self-supervised learning models and the prediction layers, and the effectiveness is well demonstrated on various benchmarks. Strengths: Discovery of Redundancy: The paper identifies an intriguing phenomenon of redundancy within graph self-supervised learning models. This finding provides valuable insights for future research directions, particularly in the full fine-tuning of these models. Clear Organization: The paper is well-organized and logical, featuring high-quality tables and figures that effectively support the presented data and analyses. Strong Performance: The performance of the proposed model is robust, showing significant improvements when compared to baseline models. Weaknesses: Some technical details in the paper are not clearly introduced, leaving several critical aspects insufficiently explained. For example, the rationale behind the use of Random Fourier Features (RFF) is not adequately addressed. It would be beneficial to provide more in-depth explanations of why RFF was chosen for this context and how it contributes to the model's performance. Additionally, the meaning and significance of Equation 2 are not thoroughly discussed, leaving readers uncertain about its role and impact within the overall framework. The paper also lacks an analysis of the learned weights 𝑊, which is crucial for understanding the model's learning dynamics and interpretability. Including such an analysis could offer valuable insights into how the model processes and prioritizes different features. Moreover, the inclusion of more experimental results would enhance the robustness of the findings.
By providing a more comprehensive set of experiments, the paper could demonstrate the consistency and generalizability of the proposed approach across different scenarios and datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weakness above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are deeply grateful for your insightful feedback and constructive suggestions. Your thorough review has significantly strengthened the quality of our manuscript. >[W\#1] About RFF **Response:** Thanks for your good question about Random Fourier Features (RFF). Deep neural networks exhibit complex dependencies between their features, where simply removing linear correlations is insufficient. A direct approach to address this challenge involves mapping features to a high-dimensional space using kernel methods. However, kernel mapping expands the original feature map to an infinite dimension, rendering it impractical to compute correlations between dimensions. Recognizing the advantageous properties of Random Fourier Features (RFF) in approximating kernel functions and measuring feature independence, SLIDE employs RFF to project original features into the function space of Random Fourier Features. Then, we eliminate the linear correlations among new features, thereby removing both linear and non-linear correlations among the original features. Without RFF, as shown in Equation 4, the regularization mechanism would degrade, focusing only on linear correlations. This highlights the critical role of RFF in enabling the model to effectively capture and model non-linear relationships. This capability significantly enhances the model's capacity to learn intricate data patterns, which is particularly advantageous in tasks such as node classification. >[W\#2] About Equation 2 **Response:** $\hat{C}\_{H\_{\*,i},H\_{\*,j}}^W=\frac{1}{N_{tr}-1}\sum_{n=1}^{N_{tr}}[(w_nu(H_{n,i})-\frac{1}{N_{tr}}\sum_{m=1}^{N_{tr}}w_mu(H_{m,i}))^T\\ \cdot (w_nv(H_{n,j})-\frac{1}{N_{tr}}\sum_{m=1}^{N_{tr}}w_mv(H_{m,j}))]$ For the meaning of Equation 2, $H_{\*, i}$ and $H_{\*, j}$ mean the i-th and j-th dimension of node embeddings. 
$W$ denotes the learnable weights of nodes in the input graph data, and $w_n$ denotes the weight of the n-th node in the training set, which has $N_{tr}$ nodes. $\hat{C}\_{H\_{\*,i},H\_{\*,j}}^{W}$ is the partial cross-covariance matrix. Moreover, $u$ and $v$ are functions in the random Fourier feature function space. Overall, Equation 2 obtains the partial cross-covariance between different dimensions of the embeddings through node weighting. As for the significance of Equation 2, it facilitates the minimization of correlations across different dimensions of the embeddings by learning the weights W associated with nodes. This approach enables the framework to effectively manage and optimize relationships among different dimensions of the embeddings, thereby reducing model redundancy and enhancing the model's performance. >[W\#3] About Weights W **Response:** Thanks for your valuable suggestions. Using MaskGAE on the Cora dataset, we analyze the learned weights W with respect to node degrees. We observe that nodes with smaller degrees tend to have larger weights. Across multiple epochs and runs, we identify the top five nodes with the highest weights for each epoch (referred to as the "big answer"). The 8th node consistently appears prominently, representing about 10% of occurrences, while over half of the nodes rarely appear in the "big answer". These findings highlight important nodes in the dataset, suggesting further investigation into their significance. Our hypothesis links these observations to the graph's underlying structure. We examine the relationship between node degrees and weights by comparing the average degree of nodes in the training set with those of the top ten nodes most and least frequent in the "big answer". Nodes with degrees ≤ 4 average 125.0222 occurrences, while those ≥ 10 average 37.8723 occurrences, reinforcing our hypothesis.
We sincerely thank you for this insightful question, which has prompted further exploration into an intriguing aspect of our analysis. >[W\#4] About the consistency and generalizability of the proposed approach **Response:** Thank you for your valuable feedback and suggestions. In our study, we conduct experiments to investigate model redundancy in graph self-supervised learning models on the classical node classification task. These preliminary experiments lay the foundation for our findings. We acknowledge the importance of expanding our analysis to include more diverse graph learning tasks, and we have conducted initial experiments on model redundancy in graph self-supervised learning models on graph classification and link prediction tasks. The results are shown below, illustrating that model redundancy in graph self-supervised learning extends across a broad spectrum of graph learning tasks. This will allow us to further validate and generalize the effectiveness of our approach across different scenarios and datasets. We appreciate your insights and look forward to incorporating them into our future work.
**Link prediction:**

|Dataset|Metric|Original|Half|Quarter|
|:------:|:----------:|:--------:|:--------:|:--------:|
||AUC| 96.66 | 96.27 | 93.80 |
|Cora|AP|96.21|96.21|93.99|
||Change-Param|\-|49.94|74.90|
||AUC|97.84|97.12|95.52|
|CiteSeer|AP|98.06|97.37|96.25|
||Change-Param|\-|49.97|74.96|
||AUC|98.79|98.31|97.38|
|PubMed|AP|98.71|98.15|96.81|
||Change-Param|\-|49.84|74.76|

**Graph classification:**

|Dataset|Metric|Original|Half|Quarter|2-Original|2-Half|2-Quarter|
|:-----------:|:----------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
|MUTAG|ACC|87.56|85.36|84.51|85.02|84.92|83.64|
||Change-Param|\-|72.08|91.53|64.61|69.99|72.68|
|IMDB-BINARY|ACC|75.32|74.40|73.54|\-|75.34|75.28|
||Change-Param|\-|71.11|90.83|\-|14.16|21.23|
|IMDB-MULTI|ACC|52.12|50.71|48.55|52.05|52.01|51.91|
||Change-Param|\-|73.23|92.42|37.38|46.73|51.41|
|REDDIT-BINARY|ACC|88.24|85.93|82.47|\-|88.03|87.93|
||Change-Param|\-|69.70|89.78|\-|13.21|19.82|
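As a concrete illustration of the weighted partial cross-covariance in Equation 2 (discussed in the response to [W\#2] above), here is a small numpy sketch. The cosine RFF map approximating an RBF kernel and all helper names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(x, n_features=64):
    """Random Fourier features for a 1-D feature column, approximating
    an RBF kernel (Rahimi & Recht): z(x) = sqrt(2/D) * cos(x*w + b)."""
    w = rng.normal(size=n_features)             # random frequencies
    b = rng.uniform(0.0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(np.outer(x, w) + b)

def weighted_cross_cov(h_i, h_j, weights):
    """Weighted partial cross-covariance between two embedding
    dimensions after mapping each through an RFF function space,
    mirroring the structure of Equation 2."""
    n_tr = len(weights)
    u = weights[:, None] * rff_map(h_i)   # w_n * u(H_{n,i})
    v = weights[:, None] * rff_map(h_j)   # w_n * v(H_{n,j})
    u = u - u.mean(axis=0, keepdims=True) # subtract the weighted means
    v = v - v.mean(axis=0, keepdims=True)
    # Sum of per-node inner products, as in the summation of Eq. 2.
    return np.sum(u * v) / (n_tr - 1)
```

Driving this quantity toward zero for all dimension pairs (i, j) by learning the node weights is what the de-correlation regularizer does.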
Summary: This paper is the first to uncover that graph self-supervised models exhibit high model redundancy at both neuron and layer levels, providing two key perspectives for graph pre-training and fine-tuning framework. This paper proposes SLIDE to achieve a pre-training and fine-tuning paradigm with fewer parameters and better performance on the downstream task. The authors conduct comprehensive experiments, showing superior performance. Strengths: Overall, this paper is in general of good quality that it is well organized and in general clearly written. The motivation is clear and strong. The problem under investigation is an interesting problem and the work offers some important discovers and results for this problem. The proposed method is simple, effective, and well-motivated with excellent performance. Some results are particularly impressive (e.g., even randomly reducing 40% of parameters, the improvement is still 0.24% and 0.27%) Weaknesses: The task evaluated in this paper is only node classification. I’m curious what about other tasks? It might be important to check the correlation between the model redundancy problem and node classification, or the problem is task-independent? The authors point out that the proposed model is orthogonal to other fine tuning methods. A detailed discussion or more experimental results would be a plus. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness section Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our sincere gratitude for the thoughtful review and constructive feedback. Your thorough evaluation has undoubtedly contributed to enhancing the robustness of our findings. >[W\#1] The task evaluated in this paper is only node classification. I’m curious what about other tasks? It might be important to check the correlation between the model redundancy problem and node classification, or the problem is task-independent? **Response:** Thanks for your question. We choose node classification as a representative task for evaluating model redundancy in graph self-supervised learning due to its classic and widely studied nature in graph learning. However, the issue of model redundancy is not limited to node classification. We further conduct similar experiments on other graph learning tasks, i.e., graph classification and link prediction, using GraphMAE and MaskGAE, respectively. The results are consistent with those observed in node classification, demonstrating that the problem of model redundancy is indeed pervasive across different graph self-supervised learning tasks. We appreciate your interest in this aspect and hope that these additional results provide a comprehensive understanding of the issue across various tasks. 
**Link prediction:**

| Dataset | Metric | Original | Half | Quarter |
| :------: | :----------: | :--------: | :--------: | :--------: |
| | AUC | 96.66±0.08 | 96.27±0.06 | 93.80±0.20 |
| Cora | AP | 96.21±0.17 | 96.21±0.06 | 93.99±0.22 |
| | Change-Param | \- | 49.94 | 74.90 |
| | AUC | 97.84±0.12 | 97.12±0.11 | 95.52±0.06 |
| CiteSeer | AP | 98.06±0.10 | 97.37±0.07 | 96.25±0.04 |
| | Change-Param | \- | 49.97 | 74.96 |
| | AUC | 98.79±0.02 | 98.31±0.03 | 97.38±0.04 |
| PubMed | AP | 98.71±0.03 | 98.15±0.06 | 96.81±0.06 |
| | Change-Param | \- | 49.84 | 74.76 |

**Graph classification:**

| Dataset | Metric | Original | Half | Quarter | 2-Original | 2-Half | 2-Quarter |
| :-----------: | :----------: | :--------: | :--------: | :--------: | :--------: | :--------: | :--------: |
| MUTAG | ACC | 87.56±0.43 | 85.36±0.42 | 84.51±0.61 | 85.02±0.92 | 84.92±1.00 | 83.64±1.13 |
| | Change-Param | \- | 72.08 | 91.53 | 64.61 | 69.99 | 72.68 |
| IMDB-BINARY | ACC | 75.32±0.56 | 74.40±0.64 | 73.54±0.36 | \- | 75.34±0.54 | 75.28±0.65 |
| | Change-Param | \- | 71.11 | 90.83 | \- | 14.16 | 21.23 |
| IMDB-MULTI | ACC | 52.12±0.42 | 50.71±0.55 | 48.55±0.60 | 52.05±0.53 | 52.01±0.69 | 51.91±0.72 |
| | Change-Param | \- | 73.23 | 92.42 | 37.38 | 46.73 | 51.41 |
| REDDIT-BINARY | ACC | 88.24±0.12 | 85.93±0.61 | 82.47±0.98 | \- | 88.03±0.19 | 87.93±0.21 |
| | Change-Param | \- | 69.70 | 89.78 | \- | 13.21 | 19.82 |

>[W\#2] The authors point out that the proposed model is orthogonal to other fine tuning methods. A detailed discussion or more experimental results would be a plus. **Response:** Thanks for your thoughtful feedback and interest in our research. In our study, we have explored how SLIDE differs from previous fine-tuning approaches.
Specifically, to demonstrate the orthogonality of our method with traditional fine-tuning approaches, we employ the following approach: we start with the classic fine-tuning method LoRA on GraphMAE, where our SLIDE model randomly prunes a subset of neurons to create frozen Slim GNNs. Subsequently, we introduce an additional LoRA module specifically designed for fine-tuning. To further validate our approach, we apply the de-correlation methods to adjust the LoRA module, aiming to reduce the correlation between embeddings within the Slim GNNs integrated with LoRA modules. The results are shown in the table below, where "Slim-LoRA" denotes direct fine-tuning of Slim GNNs using LoRA and "SLIDE-LoRA" is our method.

| Dataset | Metric | Linear-probing | LoRA | Slim-LoRA | SLIDE-LoRA |
| :------: | :----: | :------------: | :---------------: | :--------: | :---------------: |
| Cora | ACC | 83.96±0.12 | 84.18±0.34 | 83.62±0.29 | **84.26±0.43** |
| CiteSeer | ACC | 73.26±0.24 | 73.27±0.36 | 72.88±0.51 | **73.37±0.57** |
| PubMed | ACC | 80.62±0.17 | **80.69±0.61** | 80.36±0.63 | 80.63±0.65 |

SLIDE's reduction of model parameters enables fine-tuning of the entire model, highlighting a distinct advantage. However, in SLIDE-LoRA, which applies SLIDE on top of LoRA, the parameters of the Slim GNNs cannot be adjusted; only the parameters of the LoRA modules can be modified. This limitation impacts SLIDE-LoRA's performance. Nevertheless, LoRA serves as a coarse method for adjusting model parameters, enabling SLIDE-LoRA to enhance model performance by reducing correlations among the final representations. We achieve slightly superior results compared to using LoRA directly on Original GNNs, underscoring the efficacy of SLIDE-LoRA in enhancing model capabilities. The experiment provides empirical evidence supporting the feasibility and efficacy of our method in enhancing model performance and the orthogonality of our method with traditional fine-tuning approaches.
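The Slim-LoRA / SLIDE-LoRA setup described above attaches a trainable low-rank update to each frozen linear layer. A minimal numpy sketch of such a LoRA-style layer follows (hypothetical class, not the authors' code or the original LoRA implementation):

```python
import numpy as np

class LoRALinear:
    """A frozen base weight W0 plus a trainable low-rank update
    scale * B @ A (Hu et al., LoRA). Only A and B would receive
    gradients; W0 stays frozen, like the Slim GNN weights above."""
    def __init__(self, w0, rank=4, alpha=8.0, seed=0):
        rng = np.random.default_rng(seed)
        out_dim, in_dim = w0.shape
        self.w0 = w0                               # frozen (out, in)
        self.A = rng.normal(scale=0.01, size=(rank, in_dim))
        self.B = np.zeros((out_dim, rank))         # zero init
        self.scale = alpha / rank

    def __call__(self, x):
        w = self.w0 + self.scale * (self.B @ self.A)
        return x @ w.T
```

With B initialized to zero, the layer starts out identical to the frozen base layer; training then moves only the small A and B matrices, which is why the "Change-Param" fraction in such setups is small.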
Once again, we sincerely appreciate your valuable feedback and hope that these insights clarify the distinctiveness and effectiveness of SLIDE in the context of fine-tuning methodologies. --- Rebuttal Comment 1.1: Comment: I would like to thank the author for the response, which has solved my concerns. Hence, I would like to raise my score.
Summary: This paper studies the redundancy issue of GNNs that are pre-trained in self-supervised manners. Examples of these methods include GraphMAE and GRACE. The first part of this paper shows that the number of parameters of these models can be reduced by half while their performance changes little (dropping only to 96.2% of the original GNNs). This is a very interesting finding. The second part of the paper presents a pre-training and fine-tuning paradigm called SLIDE. It aims to fine-tune a slim GNN (a half-sampled pre-trained GNN), along with a prediction layer and weights of the nodes, so that the node classification performance can be improved (compared to other fine-tuning methods). Strengths: 1. The finding presented in the first part of the paper is very interesting. The redundancy found in the pre-trained weights of GNNs trained via self-supervised learning could help the simplification of GNNs and their deployment in low-resource computing environments. 2. The empirical study of the redundancy issue in the first part is reasonable and covers two representative graph self-supervised learning models (GraphMAE and GRACE). Therefore, the results are meaningful and insightful. 3. The paper overall is well written and easy to follow. Weaknesses: 1. The study of the redundancy issue in the first part is solely on the node classification task. Although node classification is the most common problem in graph learning, pre-trained GNN models are also widely used for link prediction and whole-graph classification. A further study of the redundancy issue for other tasks would make this work more solid. 2. The second part of the work (the SLIDE model) is a less valuable contribution. The motivation for “de-correlating the learned embeddings H in the fine-tuning phase, making models with fewer parameters more informative” is unclear. The “Slim GNN” preserves most of the useful weights already.
Whether or not the independence of the node embeddings is encouraged does not change the downstream-task performance much. In Fig. 5, models without “de-correlation” show only a slight performance decrease. However, including “de-correlation” increases the computational complexity. 3. The comparison in Fig. 6 with “full fine-tune” is unfair. Why not include “linear probing” for comparison (i.e., fine-tuning only the additional classifier and freezing the original GNNs)? The number of parameters in “linear probing” is much smaller, while the performance of “linear probing” is not that bad (as shown in Tables 3-5). Technical Quality: 3 Clarity: 3 Questions for Authors: While I really like the first part of the work, I have the following questions: 1. What datasets were used for pre-training these GNNs before getting the slim GNNs? 2. Any possibility to provide theoretical analysis for the redundancy issue? 3. Did the authors run statistical tests for comparing the performance of LP, FT and SLIDE in Tables 3-5? The performance drop indicated in red color is often smaller than the std, so it is unknown whether the performance difference is statistically significant. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable opinions and concerns about our work. We are glad to provide more details and further explanation, and we really hope these efforts can resolve your concerns. >[W\#1] About the issue of redundancy **Response:** To demonstrate that redundancy exists across a broader spectrum of graph learning tasks, we conduct experiments on GraphMAE, which is effective for graph classification, and MaskGAE, which excels in link prediction. The results indicate that model redundancy exists across a wide range of tasks.

Link prediction:

|Dataset|Metric|Original|Half|Quarter|
|:------:|:----------:|:--------:|:--------:|:--------:|
||AUC|96.7|96.3|93.8|
|Cora|AP|96.2|96.2|94.0|
||Change-Param|\-|49.9|74.9|
||AUC|97.8|97.1|95.5|
|CiteSeer|AP|98.1|97.4|96.3|
||Change-Param|\-|50.0|75.0|
||AUC|98.8|98.3|97.4|
|PubMed|AP|98.7|98.2|96.8|
||Change-Param|\-|49.8|74.8|

Graph classification:

|Dataset|Metric|Original|Half|Quarter|2-Original|2-Half|2-Quarter|
|:-----------:|:----------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
|MUTAG|ACC|87.6|85.4|84.5|85.0|84.9|83.6|
||Change-Param|\-|72.1|91.5|64.6|70.0|72.7|
|IMDB-B|ACC|75.3|74.4|73.5|\-|75.3|75.3|
||Change-Param|\-|71.1|90.8|\-|14.2|21.2|
|IMDB-M|ACC|52.1|50.7|48.6|52.1|52.0|51.9|
||Change-Param|\-|73.2|92.4|37.4|46.7|51.4|
|REDDIT-B|ACC|88.2|85.9|82.5|\-|88.0|87.9|
||Change-Param|\-|69.7|89.8|\-|13.2|19.8|

>[W\#2] About SLIDE **Response:** Thanks. We appreciate the opportunity to clarify the contributions and the motivations behind our approach. Our research has demonstrated the existence of model redundancy in graph self-supervised learning models. Even though we remove a portion of the parameters, model redundancy may still occur during fine-tuning. Therefore, based on our findings, we believe that it is necessary to introduce a de-correlation module during fine-tuning.
Our experiments have also shown that integrating such a module improves performance. By conducting statistical significance tests, we observe that models without de-correlation show a noticeable decrease in performance compared to models with de-correlation. Specifically, we present the p-values from experiments conducted across 6 datasets using MaskGAE. As can be seen, the p-values are below 0.01 on most of them. These results underscore the statistical significance of the performance differences between models with and without de-correlation.

|Cora|CiteSeer|PubMed|Photo|Computers|arxiv|
|:------:|:------:|:------:|:----------:|:--------------:|:--------:|
|*0.00026|*0.013|*0.0032|*0.0013|*0.027|*0.0095|

Regarding computational complexity, the de-correlation method we introduced maintains linear complexity relative to the number of nodes. This ensures that the additional computational cost is minimal compared to fine-tuning Slim GNNs without de-correlation, preserving the efficiency of the models. >[W\#3] About comparison **Response:** Thanks. The motivation behind Fig. 6 is to show that despite reducing the parameters, SLIDE maintains comparable performance. We show that our approach effectively reduces the parameter tuning required compared to full fine-tuning of Original GNNs. Actually, in Section 2 we have already compared the performance and parameter reduction achieved by "linear probing" between Original GNNs and Slim GNNs. Only classifiers are fine-tuned while GNNs are frozen. The results of these comparisons are explicitly detailed in Tables 1-2, which also implicitly indicate the proportion of parameters attributed to linear layers (50% and 25%). Furthermore, our experimental results indicate that despite significant parameter reduction, the performance of Slim GNNs does not exhibit a noticeable degradation.
>[Q\#1] About datasets **Response:** As mentioned in Section 2, we pretrain Original GNNs on six datasets (Cora, CiteSeer, PubMed, Amazon-Photo, Amazon-Computers and Ogbn-arxiv), and obtain several kinds of Slim GNNs. >[Q\#2] About theoretical analysis **Response:** Thanks. As highlighted in the conclusion, we agree with you and recognize the importance of theoretical analysis in enhancing our understanding. Theoretical analysis of model redundancy represents a significant avenue for future research. Enhancing the theoretical foundations of model redundancy will be crucial for advancing our methodologies. We could potentially explore theoretical models from several angles. For instance, we could theoretically analyze model redundancy from the perspective of feature decorrelation. Investigating causal mechanisms within GNN architectures could also offer formal insights into how redundant features affect model predictions. Additionally, developing mathematical formulations to quantify redundant features could offer intuitive insights and further optimize GNN efficiency. We appreciate your insightful question and recognize theoretical analysis as a critical next step in advancing our understanding and methodologies for addressing redundancy within graph self-supervised learning models. >[Q\#3] About statistical tests **Response:** We have performed significance tests and calculated p-values to evaluate the performance differences between our method and previous approaches. The p-values for most comparisons are less than 0.05, indicating that the performance differences are statistically significant. As an example, we present significance test results for MaskGAE in the table below.
||Cora|CiteSeer|PubMed|Photo|Computers|Arxiv|
|----|:------:|:------:|:------:|:----------:|:--------------:|:-------:|
|LP|*0.00003|*0.00043|0.42|*0.00002|*0.0000003|*0.011|
|FT|*0.0035|*0.0077|*0.035|*0.046|*0.0086|0.91|

Observing the results, 10 out of 12 comparisons pass the significance test with p-values less than 0.05. One exception is observed in the comparison between SLIDE and FT on Ogbn-arxiv, where SLIDE achieves comparable performance to FT despite using 67% fewer parameters.
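The rebuttal does not state which significance test produced the p-values above; a paired t-test over repeated runs is one common choice, sketched here with illustrative accuracies (not the paper's actual numbers):

```python
import numpy as np
from scipy import stats

# Accuracies over five repeated runs (illustrative numbers only).
slide = np.array([84.3, 84.1, 84.5, 84.2, 84.4])
ft    = np.array([83.6, 83.5, 83.9, 83.4, 83.8])

# Paired test: each run uses the same seed/split for both methods,
# so the per-run differences are what is tested against zero.
t_stat, p_value = stats.ttest_rel(slide, ft)
print(f"t = {t_stat:.2f}, p = {p_value:.5f}")
```

A p-value below the chosen threshold (0.05 or 0.01 in the tables above) indicates the per-run performance gap is unlikely to be due to run-to-run noise alone.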
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Boosting Semi-Supervised Scene Text Recognition via Viewing and Summarizing
Accept (poster)
Summary: This paper proposes a ViSu framework to enhance text recognition by viewing and summarizing. For the viewing process, the authors generate various text images to lead the model to focus on different text styles. For the summarizing process, they analyze the drawback in the existing character consistency loss and propose a CUA loss to cluster characters under different styles. As a result, the proposed framework outperforms other existing methods on multiple datasets. Strengths: 1. The design of OGS achieves promising improvements without additional human cost. 2. The authors point out that the previous consistency loss mistakenly regards some positive samples as negative samples, which is an important theoretical drawback in existing metric learning methods, and the proposed CUA loss provides a solution for this issue. 3. The proposed method gets SOTA performance on both common and challenging datasets. 4. The proposed framework can be applied to multiple text recognition models seamlessly. Weaknesses: 1. To further verify the effectiveness of OGS, the authors should compare the performance of replacing OGS with normal synthetic data. 2. Tab. 5 lacks a comparison with the baseline model. 3. In Tab. 5, it seems that MAERec-B performs better on the Union-14m dataset. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. As the semi-supervised framework can use large-scale unlabeled real data for training, why does ViSu require OGS to generate synthetic data? 2. To verify fairness, do the other semi-supervised methods in Tab. 1 and 5 have the same implementation details as ViSu? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for the feedback. In the following we address the weaknesses and questions pointed out by the reviewer. > Q1: To further verify the effectiveness of OGS. Authors should compare the performance of replacing OGS with normal synthetic data. A1: Replacing OGS with normal synthetic data means adding a background to the synthetic samples, incorporating color into the text, and using the same fonts and orientations for the characters within each sample. We have validated these elements separately in Appendix B.4. Real scene text images often have characters with consistent orientations, so generating characters with the same orientation in one sample is advantageous. Because OGS samples serve as base images and aim to enrich character morphologies without noise, introducing extra backgrounds and colors is redundant. However, applying randomized fonts to each character is shown to be effective for enhancing the robustness of the model. > Q2: Tab.5 lacks the comparison with the baseline model. A2: Thanks for pointing this out. Our method aims to raise the upper bound of recognition performance without incurring additional human costs, relying on simple synthetic labeled data and real unlabeled data. Table 5 is meant to demonstrate that our method still performs well when manual annotations are introduced, so there is no obvious baseline. We show the results of training with only real labeled data below. From the experimental metrics, we can see that our ViSu can still yield further improvement with the introduction of manual annotations.
| Method | Datasets | IIIT | SVT | IC13 | IC15 | SVTP | CUTE | WAVG | Cur | M-O | Art | Con | Sal | M-W | Gen | AVG |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Baseline | RL | **98.6** | 98.0 | **98.3** | 89.4 | 94.3 | 96.5 | 95.7 | 88.8 | 95.8 | 78.1 | 83.4 | 86.3 | 83.6 | 82.7 | 85.5 |
| ViSu | RL+RU | 98.5 | **98.3** | 97.8 | **90.4** | **96.3** | **97.6** | **96.0** | **90.7** | **96.1** | **79.4** | **85.4** | **87.1** | **86.3** | **83.1** | **86.9** |

> Q3: In Tab.5, it seems that MAERec-B performs better on the Union-14m dataset. A3: MAERec-B achieves better accuracy than ours on some benchmarks of the Union14M-Benchmark. However, our method achieves 1.7% higher average accuracy. It is worth noting that, as shown in Table 7, MAERec-B has approximately 5 times the number of model parameters and about 17 times the inference time. Our ability to achieve higher accuracy with a much lighter model underscores the effectiveness of our method. Compared to MAERec-S, which has 1.4 times our number of parameters and about 16 times our inference time, our accuracy is significantly higher (86.9% vs 78.6%). > Q4: As the semi-supervised framework can use large-scale unlabeled real data for training, why does ViSu require OGS to generate synthetic data? A4: Both real unlabeled data and labeled data are essential for a semi-supervised framework. Semi-supervised scene text recognizers require annotated data to provide the model with basic character recognition capabilities. Our OGS is designed to enrich the character morphology and maximize the potential of simple data. Therefore, using both real unlabeled data and OGS samples simultaneously is complementary, as both aim to improve the performance from different perspectives. > Q5: To verify the fairness, do other semi-supervised methods in Tab. 1 and 5 have the same implementation details as ViSu?
A5: The methods marked with * in Tables 1 and 5 are those for which we evaluate the officially released checkpoints. The absence of any mark means the metrics are those reported in the original paper. The above methods follow the same evaluation metrics as ours. For the methods marked with $\dagger$, such as TRBA-cr, we reimplement them with the same training data, but the training configuration follows the official implementation. CRNN-ViSu and TRBA-ViSu use the same training data as ViSu. As detailed in Appendix B.1, they both adopt OGS, CUA loss, URF, and a 0.5 probability of rotation by 180 degrees. The batch size is set to 384 for both synthetic data and real unlabeled data. --- Rebuttal Comment 1.1: Comment: Thank you for your effort on the response. All my concerns have been properly addressed. I also reviewed the comments from the other reviewers, and I think most of the concerns have been addressed. Additionally, I suggest incorporating the evaluation improvement experiments from the rebuttal into the paper to better evaluate the model. --- Reply to Comment 1.1.1: Comment: Thanks for your response and suggestions. We will integrate these details in the final version.
Summary: This paper focuses on character morphologies and proposes to boost the scene text recognizer through a viewing and summarizing paradigm. In the viewing process, the Mean Teacher framework is used to train with unlabeled data. In the summarizing process, the proposed method theoretically identifies the mistakes in the previous method and proposes a new loss function. I have read the author responses and the comments of other reviewers, and I would recommend the 'accept' score.

Strengths: 1. The proposed method achieves good results, especially on challenging benchmarks. 2. The paper is well written. It clearly points out the problem in the previous method and provides a detailed explanation. The proposed OGS and Mean Teacher framework effectively increase the diversity of character glyphs. The CUA loss corrects the previous error and better achieves clustering of identical characters. 3. There are lots of experiments to prove the effectiveness of the proposed method.

Weaknesses: 1. There is a lack of a clear description in the paper of how CRNN-ViSu and TRBA-ViSu in Table 1 are set up and trained. 2. Although the proposed method achieves high average accuracy on the common benchmarks and Union14M-Benchmark, it is not SOTA on certain benchmarks, such as SVT, SVTP, CUTE, Curve, Contextless, and MultiWords. 3. When the proposed method recognizes text with a vertical reading order, how does it decide whether to rotate the image 90 degrees clockwise or counterclockwise?

Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for the thoughtful and constructive review. We hope these responses will address your concerns appropriately.

> Q1: There is a lack of clear description in the paper on how CRNN-ViSu and TRBA-ViSu in Table 1 are set up and trained.

A1: For a fair comparison, CRNN-ViSu and TRBA-ViSu use the same training data as ViSu, namely MJSynth, SynthText, and Union14M-U. As detailed in Appendix B.1, they both incorporate OGS, CUA loss, URF, and a 0.5 probability of rotation by 180 degrees. The batch size is set to 384 for both synthetic data and real unlabeled data.

> Q2: Although the proposed method achieves high average accuracy on the common benchmarks and Union14M-Benchmark, it is not SOTA on certain benchmarks, such as SVT, SVTP, CUTE, Curve, Contextless, MultiWords.

A2: As shown in Table 1, the SOTA method for SVT, SVTP, and Curve is TRBA-cr*[54], which uses different training data. When we reimplement TRBA-cr$\dagger$ with the same training data, our ViSu surpasses it by 1.5%, 1.2%, and 1.1% on SVT, SVTP, and Curve, respectively. A similar situation occurs with TRBA-pr[1], the SOTA method for Contextless and Multi-Words, which uses real labeled and unlabeled data for training. For the CUTE benchmark, with its small sample size of only 288, our method is 1.1% (3 samples) lower than MATRN [25]. Small sample sizes can easily lead to error fluctuations and instability. However, MATRN is 0.7% to 2.7% lower than our method on the other common benchmarks and the Union14M-Benchmark, demonstrating the stability of our method.

> Q3: When the proposed method recognizes text with vertical reading order, how to decide whether to rotate it 90 degrees clockwise or counterclockwise?

A3: For text images with a vertical reading order, the model recognizes the results after rotating the image by both 90 and 270 degrees and selects the one with the higher confidence as the final recognition result.
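The selection rule in A3 can be sketched as below (an illustrative sketch only; `recognize` stands in for a hypothetical recognizer callable returning a `(text, confidence)` pair, which is not the paper's actual interface):

```python
def recognize_vertical(recognize, image):
    """Recognize vertical-reading-order text of unknown direction.

    Runs the recognizer on the image rotated by both 90 and 270 degrees
    and keeps whichever result has the higher confidence, as described
    in A3. `recognize(image, rotation)` -> (text, confidence) is an
    assumed interface, not one taken from the paper.
    """
    candidates = [recognize(image, rotation) for rotation in (90, 270)]
    # Pick the (text, confidence) pair with the highest confidence.
    return max(candidates, key=lambda pair: pair[1])
```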
Summary: Existing scene text recognition (STR) methods struggle to recognize challenging texts, which originates from insufficient exploration of character morphologies. To address these issues, the paper proposes to facilitate the contrastive-learning-based STR framework in a self-motivated manner by leveraging synthetic and real unlabeled data without any human cost. An Online Generation Strategy is proposed to enrich the diversity of characters in the training data. Besides, a new Character Unidirectional Alignment Loss is proposed for aligning the character features in the student model with the reference features in the teacher model. Extensive experimental results show the effectiveness of the proposed method.

Strengths: 1) The paper proposes to improve recognition of challenging texts, especially artistic and severely distorted characters, within a mean-teacher framework without requiring real labeled data. 2) An Online Generation Strategy is proposed to enrich the diversity of characters in the training data without the need to explicitly pseudo-label real data. 3) A Character Unidirectional Alignment Loss is proposed to improve on the existing Character Contrastive (CC) Loss [45] for character representation learning. 4) Extensive experiments on the Common Benchmarks, Union14M-Benchmark, and other challenging benchmarks demonstrate the effectiveness of the proposed method.

Weaknesses: 1) The overall framework follows a mean teacher framework, which is not new for semi-supervised scene text recognition. 2) In Table 1, the baseline model without RU data already achieves higher performance than the existing SOTA. Is there any special design in the baseline model? 3) Why not simply remove the second term in the equation, i.e., exclude the positives from the denominator? 4) Absence of a comparison between the proposed method and the other methods in Sec. 2.3.

Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to the weakness part.
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes. The limitation is well described in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We appreciate the reviewer's constructive feedback and address each concern below.

> Q1: The overall framework follows a mean teacher framework, which is not new for semi-supervised scene text recognition.

A1: While certain methods, such as Zheng et al.[54], also employ the mean teacher framework for semi-supervised OCR, they do not take character morphologies into consideration, making them suboptimal. We adopt the mean teacher framework to incorporate real unlabeled data, with our primary contribution being the proposal of a novel training paradigm through viewing and summarizing. Overall, we make two significant improvements to the mean teacher framework: (1) The conventional mean teacher framework usually only utilizes information from unlabeled data without additional exploration of labeled data. In contrast, our method proposes the Online Generation Strategy (OGS) to further exploit labeled data. Equipped with OGS, the viewing process enriches the diversity of character morphologies, enabling the model to generalize its ability to recognize challenging samples using only simple synthetic training data. (2) The ordinary mean teacher framework aligns the features of two views of one unlabeled sample without optimizing for text-specific characteristics. Our method enhances the robustness of the model to character morphology during the summarizing process. On the one hand, Unified Representation Forms (URF) are used to unify text image reading order and character orientation. On the other hand, the Character Unidirectional Alignment (CUA) loss theoretically corrects the previous formula error and obtains unified features for each character category. Benefiting from the above exploration of the monotonousness of the training data and the model's sensitivity to character morphologies, our method achieves SOTA performance on all benchmarks, with particularly notable superiority on the challenging Union14M-Benchmark.
> Q2: In Table 1, the baseline model without RU data already achieves higher performance than the existing SOTA. Is there any special design in the baseline model?

A2: Our baseline model incorporates URF. As described in lines 269-270, the same applies to the CRNN and TRBA baselines. As shown in Table 1, our baseline achieves higher accuracy only on the Multi-Oriented (M-O) subset of the Union14M-Benchmark. This is because URF is particularly effective at handling text with vertical and reverse reading orders, which are prevalent in M-O. Below, we further present the performance of the baseline model without URF. The experiments indicate that our ViSu equipped with URF significantly excels at recognizing challenging texts. We will also release the code.

| Method | Datasets | IIIT | SVT | IC13 | IC15 | SVTP | CUTE | WAVG | Cur | M-O | Art | Con | Sal | M-W | Gen | AVG |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Baseline without URF | SL | 96.1 | 94.0 | 95.9 | 85.8 | 87.8 | 89.9 | 92.3 | 58.0 | 17.0 | 52.9 | 55.4 | 64.9 | 54.8 | 61.3 | 52.0 |
| Baseline | SL | 96.1 | 94.4 | 96.5 | 86.0 | 88.2 | 88.5 | 92.5 | 60.5 | 82.7 | 53.6 | 54.0 | 70.6 | 54.5 | 62.0 | 62.6 |
| ViSu | SL+RU | **97.6** | **96.1** | **98.3** | **89.3** | **91.3** | **92.4** | **94.7** | **71.6** | **85.8** | **66.8** | **57.6** | **80.3** | **62.6** | **71.2** | **70.9** |

> Q3: Why not simply remove the second term in the equation, i.e., exclude the positives from the denominator?

A3: The second term in the denominator represents the feature similarity between a character and all identical characters in the mini-batch except itself. When optimizing the CUA loss, the feature distances between all characters of the same category are minimized, i.e., the same characters are clustered. As illustrated in Fig. 1(b), the same character can have very different representation forms.
This optimization process plays a crucial role in extracting the visual feature commonalities, which helps the network enhance its robustness to character morphologies. Experimental results supporting this are presented below.

| Method | Cur | M-O | Art | Con | Sal | M-W | Gen | AVG |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| without CUA Loss | 69.9 | 83.9 | 64.1 | 54.9 | 76.0 | 60.8 | 67.6 | 68.2 |
| CUA Loss without second term | 71.1 | 85.0 | 65.3 | **57.9** | 79.5 | 62.4 | 70.6 | 70.3 |
| CUA Loss | **71.6** | **85.8** | **66.8** | 57.6 | **80.3** | **62.6** | **71.2** | **70.9** |

> Q4: Absence of a comparison between the proposed method and the other methods in Sec. 2.3.

A4: In Sec. 2.3, we reference Baek et al.[1], ABINet[10], Zheng et al.[54], Gao et al.[11], and Seq-UPS[27]. We compare ViSu with [1][10][54] in Sec. 4. However, [11] and [27] did not release their code or checkpoints, preventing detailed comparisons on metrics such as performance on the Union14M-Benchmark, parameter counts, and speed. Below, we compare the metrics reported in their papers with our method. The experimental results indicate that our ViSu significantly outperforms these methods on several common benchmarks.

| Method | IIIT | SVT | IC13 | IC15 | SVTP | CUTE |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Gao et al.[11] | 74.8 | 78.1 | 81.2 | 54.7 | - | - |
| Gao et al.[11] (ensemble) | 76.8 | 80.8 | 84.5 | 57.6 | - | - |
| Seq-UPS[27] | 92.7 | 88.6 | 92.2 | 76.9 | 78.8 | 84.4 |
| Seq-UPS[27] w/ SeqCLR (All-to-instance) | 92.3 | 87.2 | 91.8 | 77.9 | 78.9 | 85.4 |
| Seq-UPS[27] w/ SeqCLR (Frame-to-instance) | 92.8 | 86.7 | 92.6 | 77.4 | 79.2 | 86.1 |
| Seq-UPS[27] w/ SeqCLR (Window-to-instance) | 93.1 | 86.7 | 91.7 | 76.8 | 81.4 | 85.9 |
| ViSu | **97.6** | **96.1** | **98.3** | **89.3** | **91.3** | **92.4** |

---

Rebuttal Comment 1.1:

Title: Official Comment by Reviewer Smtd

Comment: Thanks for the response by the authors. All my concerns have been properly addressed.
After considering the rebuttals for all the reviews, I'd like to raise the score to WA. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for engaging in the discussion and updating the score. We will further revise our paper in the final version.
Summary: This paper addresses the problem of insufficient exploration of character morphology in scene text recognition and proposes a new framework comprising an Online Generation Strategy (OGS) and a Character Unidirectional Alignment (CUA) Loss to enable the model to learn from unlabeled real data. OGS mitigates the issue of data scarcity, while CUA aids in clustering characters. Comprehensive experiments demonstrate the effectiveness of the proposed method.

Strengths: 1. The proposed method effectively addresses the issue of excessive representation forms for scene text images through Unified Representation Forms. 2. The errors in the previous Character Contrastive Loss method are corrected. 3. The idea of utilizing character morphology is novel.

Weaknesses: 1. The writing quality needs improvement. 2. There is a lack of detailed explanation regarding the Online Generation Strategy (OGS) and Unified Representation Forms (URF).

Technical Quality: 3 Clarity: 2 Questions for Authors: The proposed method struggles with handling images with extreme aspect ratios. The authors attempt to address this by splitting the image, recognizing each part, and then concatenating the results. However, this splitting operation may interfere with character recognition and degrade the overall recognition quality. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for the constructive feedback provided. Below, we address the identified weaknesses and questions.

> Q1: The writing quality needs improvement.

A1: We appreciate your observation regarding the writing quality. We will refine and enhance the clarity of the writing.

> Q2: There is a lack of detailed explanation regarding the Online Generation Strategy (OGS) and Unified Representation Forms (URF).

A2: Thanks for highlighting this issue. We will revise these sections in the subsequent version.

Unified Representation Forms: Different from typical visual objects, text images are characterized by serialized information, necessitating a specific reading order dictated by linguistic rules. Moreover, scene text images frequently exhibit rotation, resulting in varied character orientations. As illustrated in Fig. 2, the inconsistent representation forms caused by multiple combinations of reading orders and character orientations undoubtedly increase the difficulty of network convergence. Besides, scene text images with vertical reading orders often have small aspect ratios, which leads to significant deformation of visual information when resized and renders the characters unrecognizable. To accommodate both vertical and horizontal reading orders, we intuitively rotate vertical images clockwise to unify the reading order to horizontal, which we refer to as Unified Representation Forms (URF). Specifically, an image with a height-to-width ratio exceeding a certain threshold is rotated; otherwise, it remains unchanged. It is easy to notice that the existing four representations can then be distilled into two primary forms: a left-to-right reading order and its counterpart rotated by 180 degrees. This simplification and unification alleviate the issue of inconsistent representation forms and lay the foundation for the following Online Generation Strategy (OGS) process.
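The aspect-ratio rule described above can be sketched as below (illustrative only; the threshold value of 1.5 is an assumed placeholder, since the rebuttal only mentions "a certain threshold"):

```python
def urf_rotation(height, width, threshold=1.5):
    """Decide the clockwise rotation applied before recognition under URF.

    An image whose height-to-width ratio exceeds the threshold is treated
    as vertical text and rotated 90 degrees clockwise to a horizontal
    reading order; otherwise it is left unchanged. The threshold of 1.5
    is an assumed placeholder, not a value stated in the paper.
    """
    return 90 if height / width > threshold else 0
```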
Online Generation Strategy: The problem of diverse character morphology primarily stems from the simplistic and uniform training samples in synthetic datasets. Despite the abundance of samples in synthetic datasets, the homogeneity of character styles restricts the model's ability to acquire useful information for recognizing challenging texts in real scenarios. To address this problem, we propose to generate text samples in the two primary forms and their 180-degree rotations based on their labels. As depicted in Fig. 2, these samples are background-free but have diverse character styles. Following the characteristics of real samples, all character orientations within a generated sample are consistent. Random selection of font, position, and character spacing enhances the diversity of character morphologies. Both synthetic training images and their corresponding online-generated samples are concurrently fed into our semi-supervised framework. By eliminating background noise and enriching the diversity of character styles, these samples encourage the model to concentrate on character morphology and generalize the ability to recognize complex texts from simple synthetic training data.

> Q3: The proposed method struggles with handling images with extreme aspect ratios. The authors attempt to address this by splitting the image, recognizing each part, and then concatenating the results. However, this splitting operation may interfere with character recognition and degrade the overall recognition quality.

A3: (1) In practical application scenarios, text recognizers are typically attached to text detectors, working collaboratively to complete the OCR process. Therefore, employing a word-level text detector instead of a line-level detector can mitigate this issue. (2) When encountering long texts with extreme aspect ratios, resizing them to a preset aspect ratio leads to a loss of contextual information, which is a common problem among text recognizers.
Although slicing long texts for recognition may indeed cause some degradation, it still yields better results than attempting to recognize the entire text in one pass, which is a limitation of the current model. Future specialized improvements and optimizations can be developed to enhance the recognition of long texts.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
COLD: Causal reasOning in cLosed Daily activities
Accept (poster)
Summary: This paper proposes the COLD (Causal reasOning in cLosed Daily activities) framework, aiming to bridge the gap between open-ended causal reasoning and symbolic representation-based question answering. The framework leverages human understanding of daily real-world activities to reason about the causal nature of events. The authors create a large set of causal queries and evaluate multiple Large Language Models (LLMs) on these queries. The findings show that causal reasoning is challenging for LLMs, even for activities considered trivial for humans. The authors also explore the causal reasoning abilities of LLMs using the backdoor criterion. The key contributions of this work are the development of the COLD framework, the creation of a substantial number of causal queries, and the evaluation of LLMs' performance on causal reasoning tasks. The findings highlight the need for further analysis using real-world events to properly validate LLMs' understanding of causality. Strengths: - The COLD framework effectively bridges the gap between open-ended causal reasoning and symbolic representation-based question answering, utilizing human understanding of daily activities as a solid foundation. - The paper addresses the crucial issue of evaluating LLMs' causal reasoning capabilities, emphasizing the significance of investigating and validating their intellectual capabilities. - The paper is well-written, with clear explanations, logical flow, and concise language, ensuring effective communication of key points. - The evaluation of multiple LLMs on a large set of causal queries reveals limitations in their causal reasoning abilities, while the exploration using the backdoor criterion provides valuable insights into causal strength between events. Weaknesses: Generally, I believe this paper makes good contributions. 
However, there are some minor issues that need to be addressed: - Since the queries are mostly automatically generated, it is necessary to support them with human annotations or expert evaluations in order to confirm the reliability of the generated queries. - The observational graphs are created through human annotations, which limits their capacity to cover a wide range of concepts. It would be better to discuss automated approaches for constructing such graphs in order to facilitate large-scale causal benchmarking. - Can the synthesized queries be used for fine-tuning? I am interested in whether splitting the observation graphs into different sets and training them on queries synthesized from the training graphs would significantly improve performance. This could greatly enhance the comprehensiveness of the paper. - Additionally, another set of baselines focusing on zero-shot commonsense question answering should be evaluated as well. It would be interesting to see whether transformations from commonsense knowledge bases can benefit causal reasoning tasks. I recommend checking these papers for reference. - Ma, K., Ilievski, F., Francis, J., Bisk, Y., Nyberg, E., & Oltramari, A. (2021, May). Knowledge-driven data construction for zero-shot evaluation in commonsense question answering. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 15, pp. 13507-13515). - Wang, W., Fang, T., Ding, W., Xu, B., Liu, X., Song, Y., & Bosselut, A. (2023, December). CAR: Conceptualization-Augmented Reasoner for Zero-Shot Commonsense Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 13520-13545). - Kim, Y. J., Kwak, B. W., Kim, Y., Amplayo, R. K., Hwang, S. W., & Yeo, J. (2022, July). Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning. 
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 2244-2257). - Lastly, there are some grammar typos that need to be corrected. For example, the caption of table 3 should not have "Language" capitalized. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weakness section for my questions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have dedicated a section explicitly discussing the limitations of their proposed COLD framework and offering potential solutions. The in-depth discussion contributes significantly to the paper, enhancing its overall quality. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank you for your detailed and insightful review, pointing towards some suitable directions to make our work more impactful. We would like to address the raised concerns below:

* We want to mention that the human/expert annotations were done to construct the underlying causal graph, capturing the causal relationships between the events. Previously, works like Jin et al. have released causal queries (https://huggingface.co/datasets/causalnlp/corr2cause, https://huggingface.co/datasets/causalnlp/CLadder) generated from an underlying causal graph, which require an understanding of causal inference theory to be annotated purely by humans. In contrast, the commonsense reasoning nature of our proposed framework is its primary strength, bridging the gap left by current causal reasoning benchmarks/datasets in NLP. Please also refer to our detailed comment on human evaluation in the response to Reviewer iBzV, where we explain the challenges in human evaluation.

* The advantage of using a human-crowdsourced dataset is the marginalization that we obtain when the same activity is written by different humans. We would like to highlight that the primary strength of this work, providing real-world grounding, comes from the observational graphs that come directly from humans (capturing the commonsense knowledge acquired by humans). When creating a benchmark or evaluation criteria, it becomes imperative to consider human knowledge. Using the constructed graphs, the proposed design scheme helps sample enormous numbers of causal queries that essentially facilitate large-scale causal benchmarking.

* We thank you for pointing this out. We agree that it would be interesting to explore fine-tuning over the created samples to observe the model's behavior.
We highlight the same in the future directions of our work in lines 354-357: "In the future, it would be interesting to sample trajectories from the observational distribution to create a training dataset and check if causal reasoning ability can be acquired by language modeling objectives (including other variants like presented in Lampinen et al. (2023))." Some of the prior arts, like Corr2Cause (Jin et al., 2024), have considered training/finetuning over a symbolic dataset and observed no generalization, claiming LLMs fail to robustly acquire causal reasoning skills in out-of-distribution settings.

* We thank you for pointing out these interesting resources. In fact, we found the strategy we formulated for zero-shot evaluation similar to the one used in the first resource (Ma et al.): the formulation used in equation 1 of Ma et al.'s work [https://doi.org/10.1609/aaai.v35i15.17593] is similar to ours, the difference being that multiple tokens are used in their setup, compared to simple option-id prediction in our formulation. The remaining two resources pointed out (Wang et al. and Kim et al.) are somewhat different, as they rely on commonsense knowledge bases (knowledge graphs), and would require more work to evaluate in our causal reasoning setting. We agree that another set of baselines focusing on zero-shot commonsense question answering would be a good direction to explore in the future.

* We thank you for pointing out the typos; we will fix them in the camera-ready version of the paper.

---

Rebuttal Comment 1.1:

Comment: Dear Reviewer LejL, Thanks again for helping review this paper! Since we are approaching the end of the author-reviewer discussion period, would you please check this author response regarding your concerns? We really appreciate it! Best, AC
Summary: This paper proposes causal reasoning in closed daily activities, which combines the causal reasoning lines of work on open-ended causal reasoning via causal commonsense reasoning and on symbolic representation-based question answering for theoretically backed-up analysis. By creating a dataset containing about 8 million queries, it can estimate the causal reasoning performance of pretrained language models. Moreover, the paper introduces theories of causal inference (such as backdoor adjustment) to conduct in-depth analyses of causal reasoning in pretrained language models.

Strengths: 1. The number of constructed dataset samples is large. 2. The usages of ATE and backdoor adjustment are novel. 3. The analyses of causal reasoning in pretrained language models are thorough and in-depth.

Weaknesses: 1. The paper is not well presented; readers might be confused when reading sequentially. 2. For the dataset itself, there are no differences between COLD and COPA; COPA may even have better quality. 3. The dataset is limited to six scenarios, and many samples are just paraphrases of the original ones. 4. Lack of human performance as a reference.

Technical Quality: 3 Clarity: 2

Questions for Authors: 1. In lines 78-80, you give an event pair that could be erroneously treated as causally related by humans. What is the difference between this event pair and the correct cause-effect pairs in your dataset? Most of the causal relations in your dataset are about causation of necessity. 2. Even though your dataset is in closed daily activities, the final version of the queries does not reflect this. 3. Why is the definite setting (COLD) better than the plausible setting (COPA)? I think causality is essentially a question of plausibility; there is no absolute causal relationship between two events. 4. How would humans perform on this dataset? Can humans obtain the answer by just looking into the temporal relationship between events?
How many queries can be answered correctly from a statistical perspective (not from the perspective of LMs)? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for providing your insightful comments. Please find the responses to the requested clarifications below.

* Given the wider scope of this work, we agree that the paper might have become too dense, affecting the presentation quality. We would love to hear more feedback from you to incorporate into the final version of our paper to improve the presentation quality, focusing on the reading sequence for better comprehension.

* We would like to respectfully disagree regarding the lack of difference between COPA and the proposed dataset. COPA (with only 1000 samples) is a more open-ended dataset, and its causal queries can be of any generic domain. In contrast, the dataset proposed in our work has enormous numbers of causal queries (8.3M in total), covering all the aspects of a particular activity (i.e., not open-ended), which essentially facilitates a more rigorous analysis (coming close to simulating the mini-Turing test) and is hard to beat by mere memorization. Thanks for pointing this out; the table shown in the paper was meant to highlight the similarity between the datasets, and we will add a more detailed comparison stating the differences between the datasets in the camera-ready version of the paper.

* We have already highlighted the limited set of activities in the limitations section, and we agree that it would be good to explore more real-world activities in the future. However, the number of causal queries that can be sampled is enormous, and they are not exact paraphrases of each other. Every scenario is different and has a different set of texts describing the events in the particular activity. The paraphrased versions refer to the same action/event described by different humans with varying granularity, written independently by the crowdsource workers.
For example, when describing an event like "preheating oven", some people will write "preheat oven to 350 degrees", whereas others will write it in detail: "preheat the oven by switching on the oven and change the temperature setting to the required temperature". We agree that using the term "paraphrased" for such cases may not be appropriate; we thank you for pointing this out, and we will update the details accordingly in the paper. These different versions of the same described event enhance the dataset quality by a significant margin, providing a much more robust evaluation of LLMs. In other benchmarks, keeping only one version often limits rigorous evaluation.

* Please refer to our detailed comment on human evaluation in the response to Reviewer iBzV.

**Response to Questions/Clarifications:**

**Q1)** With the examples provided in lines 78-80, we wanted to highlight the importance of the counterfactual reasoning required to reason causally between events, where one does not only need to consider the temporal order of events but also needs to think counterfactually, imagining an alternate universe in which one event does not occur and asking whether the other event would still occur. In the example, the event "waiting at the luggage belt" may not occur even if the person has "boarded the flight" but "not checked in the luggage"; hence, boarding the airplane is not a direct cause of "waiting at the luggage belt". This marks the third rung of the causal ladder proposed by Pearl et al., which is also considered a core feature of intelligence (Penn and Povinelli, 2007; Pearl and Mackenzie, 2018) [lines 23-24].

**Q2)** The final version of the dataset assumes the activity to happen (also considered in the evaluation prompts as context). Moreover, the marginalization over pre- and post-activity events makes it a closed system for rigorous analysis, where all the unobserved variables (Figure 1, represented by U, i.e.,
intention to perform a task) take care of separating out the causal effect from events that may occur in the rest of the world. This setup also provides adherence to SUTVA, which is not possible in a dataset like COPA. Prior works (ROCK [https://arxiv.org/abs/2202.00436]) have also clearly highlighted this as a major issue in natural language-based approaches. We believe this work mitigates those issues implicitly by design, making it a much more reliable framework for claims regarding causal reasoning abilities. **Q3)** COLD provides a closed setup that can act as a suitable testbench for rigorous analysis, in comparison with COPA, which is an open-ended causal reasoning benchmark with only 1000 samples. Moreover, the presence of an underlying causal graph in COLD facilitates an enormous number of causal queries, making use of causal dependency concepts like d-separation. Regarding the causal relationship between two events, COLD provides a medium to cover all three rungs of the causal ladder: 1) association, 2) intervention, and 3) counterfactuals (Pearl and Mackenzie, 2018). The relationship is determined not only by plausibility but also by whether the occurrence of one event causes another. For example, “checking-in luggage” will directly cause events like “collect luggage from luggage belt” to occur. **Q4)** It is to be noted that temporal precedence is generally assumed essential for defining causation, and it is one of the most important clues used to distinguish causal from other types of associations [Mill, 1898; Hill, 1965; Pearl and Verma, 1995]. For our framework, we also consider the topologically sorted order obtained from the observational graphs and use the temporal order to define the causal query triplets, i.e., the cause events will always precede the effect events. We also perform the statistical analysis suggested in the question, without using any LMs.
The first two rows of Table 4 show the results for the same, with a detailed explanation of the statistical $\Delta$ computation provided in the appendix. Please refer to our detailed comment on human evaluation in the response to Reviewer iBzV. --- Rebuttal Comment 1.1: Title: Replying to the Rebuttal of Authors Comment: Thanks for your detailed response. Some main concerns still remain, so I intend to keep my rating unchanged: * For the causation of necessity, "boarding a plane" is not a cause of "waiting at the luggage belt", while for the causation of sufficiency, they have a causal relationship. So if you do not provide the event "not checking in luggage", you cannot say there is not a causal relationship. Compared to "boarding a plane", "checking in luggage" indeed has a larger probability of becoming a cause of "waiting at the luggage belt". If I provide an event such as "forgetting to collect the luggage", "flight is canceled", or "falling asleep and missed the flight", the causal relationship between "checking in luggage" and "waiting at the luggage belt" does not exist. I insist that causality is about probability if we cannot capture all confounders. * About the human evaluation: I still think human evaluation is important, even if language models do not perform well in temporal settings. If humans can easily infer the correct answer by temporal order or a necessary relationship between events, then it can be a shortcut to be utilized by language models. By the way, I do not see any human evaluation results in your response to Reviewer iBzV. --- Reply to Comment 1.1.1: Comment: Dear Reviewer SJyu, Thank you for engaging in the discussion. We agree that the event "checking in luggage" indeed has a larger probability (larger value of causal estimand/strength) of becoming a cause of "waiting at the luggage belt”.
The underlying assumption made in the framework (also highlighted in Figure 1 of the paper) is that the occurrence of all events is confounded (caused directly or indirectly) by the event U (which includes the “intention to perform a task”). Given the nature of the instructions provided to the crowd-sourced workers, namely “write the steps in the telegraphic style to perform the activity (flying in an airplane, in this case)”, the assumption becomes valid, and events such as "forgetting to collect the luggage", "flight is canceled", or "falling asleep and missed the flight" will not occur in that case. Hence our framework provides a closed system rather than an open system where all these events can take place, which also comes with the advantage of SUTVA being valid (missing from any previous works in NLP and CCR). We thank you for pointing this out; we agree that providing these explanations in detail will improve the presentation quality of our work, and we will make suitable changes in the updated version of the paper. We would like to reiterate that the condition of temporal precedence is necessary but not sufficient, and only provides a weak signal that helps in considering the causal relationships between the events. Temporal precedence only provides a cue, i.e., the cause events will always precede the effect events. We also created another version of the dataset to match the findings from (Do et al., 2011) on human annotation of causal relationships. We mention it briefly in the main paper, with more details in the appendix. [Lines 257-259] ”We also experimented with another version of the dataset, where incorrect choice may correspond to temporally plausible but causally implausible events. The results drop significantly in this case, details and results are provided in App.
F.1.” [Lines 929-937] “Some of the initial studies (Do et al., 2011) highlight the difficulty in choosing between the causal effect events and temporal events (that occur in close proximity to the premise event), i.e., temporal relationships are sometimes considered as a causal relationship by human annotators. We also create another version of created causal triplets where the wrong choices are replaced by temporally near nodes (nodes that are at a one-hop distance from the premise node). We call these ‘causally hard triplets.’ Note the temporal nodes are obtained from the observational graphs Go. Table 6 shows the performance comparison with causal triplets and causal-temporal triplets versions of the same queries. We observe a significant performance drop on the causal-temporal triplets version for most models, highlighting the increased confusion.” Quang Do, Yee Seng Chan, and Dan Roth. 2011. Minimally supervised event causality identification. EMNLP, pages 294–303. We are sorry if the description of the lack of human performance was not clearly highlighted in the response to Reviewer iBzV. We restate it below: Lack of Human Performance: We would like to mention that validating human performance is challenging due to the nature of the causal reasoning task. Counterfactual reasoning requires the human/algorithm to assume a similar alternate world/universe in which only a particular event happens or does not happen, in order to approximate the causal strength between the events. These imaginations can be expressed in statements, as highlighted by Pearl et al., containing an “if” statement in which the “if” portion is untrue or unrealized, which is known as a counterfactual. The “if” portion of a counterfactual is called the hypothetical condition or, more often, the antecedent, making it challenging (cognitively heavy) to conduct a human evaluation.
We agree that human performance may be lower than complete perfection, and adding human performance in these tables would make the comparisons more informative. However, the true dependency of the events comes from the underlying causal graph, making the generated causal queries accurate. Previously, works like Jin et al. have released causal queries (https://huggingface.co/datasets/causalnlp/corr2cause, https://huggingface.co/datasets/causalnlp/CLadder) generated from an underlying causal graph, which require an understanding of causal inference theory and are therefore difficult to annotate purely by humans. Reviewer iBzV had similar concerns, acknowledged the difficulty of performing human evaluations, and consequently increased their score. We are grateful for your invaluable comments; considering them will definitely improve the presentation quality of our work. Please let us know if you have any further questions or require additional clarification.
Summary: The paper proposes a dataset for evaluating the causal reasoning capabilities of LLMs by grounding evaluation in human understanding of real-world daily activities. The authors address the gap between open-ended causal commonsense reasoning and symbolic question answering by introducing the COLD framework, which generates causal queries based on daily activities. The paper describes the creation of causal graphs through crowd-sourcing. The authors test different LLMs using these causal queries, and show that even simple causal reasoning tasks remain challenging. Strengths: 1. Its approach to evaluating causal reasoning in LLMs by grounding the evaluation in real-world activities. 2. The use of a large dataset to create extensive causal queries. 3. The detailed experimentation with various open-source LLMs and the plan to publicly release the framework and datasets. Weaknesses: 1. The data used in the study appears overly simplistic, focusing on basic daily activities which may not provide a robust test for causal reasoning. If the goal is to evaluate how well models understand causal reasoning I would stay close to how we perform causal inference in science (i.e. gather data, understand whether we have an identification strategy, compute treatment effects/learn causal graph). 2. Causal Commonsense Reasoning is not causal in the sense of statistical causality. Almost all uses of causality in science are either: 1. To estimate a treatment effect in the real-world 2. To discover a causal graph, again in the real-world. This is done either from interventional or from observational data. This data doesn’t capture any of this, so I wonder what the usefulness of it is in relation to the causality literature. 3. The insights drawn from the experiments are not clearly articulated. What exactly has been learned from this new extensive dataset and all of the experiments? The ATE experiments are particularly hard to understand. 
ATE measures the marginal effect of a treatment on an outcome. Comparatively, this dataset includes binary questions about causal triplets. In the results the authors show accuracy measures, I don’t understand how to interpret this as the ability to properly estimate the effect, and I wasn’t able to understand how to do so from the authors’ description. 4. The examples provided (e.g. Figure 3) often lack clarity, as both options seem plausible effects without additional context or a causal graph. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What specific insights have you gained from your experiments with this dataset? Can you elaborate on how these insights contribute to our understanding of the causal reasoning capabilities of LLMs? Could we use it to propose any improvements to the current models? 2. Are there plans to incorporate more traditional causal inference use-cases, such as treatment effect estimation or causal graph discovery, into your framework? I mean this in the sense of evaluating the capabilities of LLM in performing these tasks. 3. In the first example in Figure 3, both options appear plausible without additional context. Can you provide a detailed explanation of how the correct choice is determined in such cases, and whether there is a mechanism to ensure that the queries are unambiguous? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors acknowledge limitations of their work, such as the restricted set of activities and the challenge of creating causal graphs for more complex long-term tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for pointing out some of the important directions. * The scope of this paper was limited to commonsense knowledge, and we agree that the data used is simplistic in nature (easy for humans to reason about) and does not deal with events beyond commonsense knowledge. We also highlight this in lines 72-75 of the paper: CCR excludes causal events that are beyond the reach of commonsense knowledge. For example, does “planting trees” have a direct impact on the “rainy season”? Does “providing free education” improve the “economic condition of the country/state”? Does “carpooling” directly impact “air pollution”? It is to be noted that the primary goal of this work is to bridge the gap between real-world causal reasoning performed by humans on a daily basis and symbolic approaches, by providing the underlying causal graph for rigorous analysis. Performing causal reasoning on a broader scale that requires a randomized controlled trial (as suggested in the weakness) is challenging for humans and is highly dependent on the methodology rather than reasoning/intelligence. * Recent works like ROCK [https://arxiv.org/abs/2202.00436] approach causal commonsense reasoning from the perspective of causal theory. Our work provides a medium to integrate the causal theory perspective (previously used by papers like Corr2Cause and CLadder for symbolic queries verbalized as natural language prompts to evaluate LLMs) and causal commonsense reasoning. We believe this work will be the first to consider both and will facilitate research into making LLMs better causal reasoners. * We are sorry for the unclear explanation provided in the paper. We would like to clarify that prior works along similar lines (ROCK [https://arxiv.org/abs/2202.00436], COLA [https://aclanthology.org/2023.acl-long.288/]) have used ATE estimation to perform zero-shot evaluation with language models like RoBERTa on datasets like COPA.
We followed a similar perspective to estimate the ATE between events, which is then used to perform classification with the obtained $\Delta$ estimates: given two options, if the ATE of option 1 is greater than that of option 2, option 1 is predicted as the answer, and these predictions are used to compute accuracy over the dataset. Algorithm 2 uses the causal estimands $\Delta$ to compare the causal strength between the premise event and the choice events: we compute the causal estimand between the premise and each of the available choices and predict the label corresponding to the higher $\Delta$ value. Each entry $\mathcal{D}\_i$ of the created causal query triplet dataset $\mathcal{D}$ corresponds to $(p, c_1, c_2, q, l)$, i.e., the premise event, choice 1, choice 2, the question, and the label, respectively. As the task is to predict the more plausible cause/effect of the given premise event, we create two event pairs, $(p, c_1)$ and $(p, c_2)$, and compute the causal estimand $\Delta$ for both pairs using the temporal or the backdoor scheme (described in Algorithm 3). Note that the order of events given to $\Delta_{\mathcal{M}}$ is $(E_1, E_2)$, i.e., $\Delta_{\mathcal{M}}(E_1, E_2)$. By temporal precedence (highlighted as a remark above), the cause event always precedes the effect event. Hence, for a causal query whose question is 'cause', the causal estimands are $\Delta_{\mathcal{M}}(c_1^i, p^i)$ and $\Delta_{\mathcal{M}}(c_2^i, p^i)$; when the question is 'effect', they are $\Delta_{\mathcal{M}}(p^i, c_1^i)$ and $\Delta_{\mathcal{M}}(p^i, c_2^i)$. Based on the estimated $\Delta_{\mathcal{M}}$ scores, the more plausible cause/effect is predicted. * The examples provided in Figure 3 include: Premise: go to store and buy cake mix. Question: Which of the following is an effect? Choice 1: come home with the ingredients. Choice 2: go to kitchen.
In the given example, using counterfactual reasoning, we can say that “going to the kitchen” is possible without going to the market (if the ingredients are already available); however, when going to the market, the event “come home with the ingredients” will always take place, making it the more plausible effect among the given choices. Similarly, example 2 in the figure is also explainable, as going to the market has no direct relation with heating the oven. Thank you for pointing this out; we will add a more detailed explanation for better clarity of the presented idea. **Response to Questions/Clarifications:** **Q1)** The primary insight from the results is that LLMs lack causal commonsense reasoning abilities in general. Moreover, to provide a suitable test of the models' understanding, we also apply the backdoor criterion and check whether it improves performance. Interestingly, applying the backdoor criterion does improve performance over simple temporal prediction: as highlighted in Table 4, we see a significant boost when applying the backdoor adjustments. We believe this strengthens the claim that the causal theory perspective can improve the performance of LLMs as causal reasoners. **Q2)** We completely agree that this framework can be extended to perform various other evaluations, such as treatment effect estimation or causal graph discovery, and that incorporating these would enhance the evaluation. However, given the density of the paper, we believe they go beyond the scope of this work. We provide a few future directions along similar lines in the main paper. We thank you for pointing out these additional directions; we will add them to the camera-ready version of the paper. **Q3)** Please refer to our response to the last point of the weaknesses. We hope that considering the counterfactual situation makes the causal query justified.
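For concreteness, the $\Delta$-comparison prediction rule from Algorithm 2, as described in our response above, can be sketched as follows. This is a minimal illustration; the function and variable names are hypothetical and not the paper's actual code, and `delta` stands in for whichever causal-estimand scheme (temporal or backdoor) is used:

```python
# Hypothetical sketch of the Delta-comparison prediction rule (Algorithm 2).
# delta(e1, e2) is assumed to return the causal estimand Delta_M(e1, e2),
# where e1 temporally precedes e2.

def predict_choice(delta, premise, choice1, choice2, question):
    """Pick the more plausible cause/effect by comparing causal estimands."""
    if question == "cause":
        # Cause precedes effect, so candidate causes go in the first slot.
        d1 = delta(choice1, premise)
        d2 = delta(choice2, premise)
    else:  # question == "effect"
        d1 = delta(premise, choice1)
        d2 = delta(premise, choice2)
    return "choice1" if d1 > d2 else "choice2"


def accuracy(delta, dataset):
    """dataset entries: (premise, c1, c2, question, label),
    with label in {"choice1", "choice2"}."""
    hits = sum(
        predict_choice(delta, p, c1, c2, q) == label
        for p, c1, c2, q, label in dataset
    )
    return hits / len(dataset)
```

Accuracy over the dataset then simply measures how often the higher-$\Delta$ choice coincides with the gold cause/effect label.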
We thank you for pointing this out; we will improve the writing to explain these examples in the camera-ready version of the paper. --- Rebuttal Comment 1.1: Comment: Dear Reviewer G7bs, Thanks again for helping review this paper! Since we are approaching the end of the author-reviewer discussion period, would you please check this author response regarding your concerns? We really appreciate it! Best, AC
Summary: This paper presents a new causal reasoning dataset for LLMs. The main motivation of the dataset is to bridge the gap between causal commonsense reasoning datasets and symbolic representation-based causal reasoning datasets. Specifically, the proposed dataset collects crowd-sourcing observations, and constructs related causal graphs and triplets. This paper evaluates multiple open-source LLMs using the collected datasets, benchmarking LLMs' ability to predict causal relationships, and to estimate average treatment effect. Strengths: 1. The construction of the causal graph and the estimation of causal relationships are strict and sound. The whole dataset provides high-quality causal relationship annotations for real-life events. 2. A large number of open-source models are evaluated. The evaluation on ATE estimation also involves multiple estimation methods using LLMs. Weaknesses: 1. Human performance is not provided in the empirical comparisons. This makes it hard to understand LLM's performances. One can argue that the performances are far from perfect. However, due to the inherent ambiguity of natural languages, and inherent disagreement in people's opinions, I would assume human performance will also be significantly lower than 100%. Adding human performance in these tables will make the comparisons more informative. 2. Many important details are in the appendix. I understand this is a dense paper with lots of content, however, the presentation and organization can be greatly improved. Additionally, the current results section includes a significant part on how to estimate ATE, which should be included in previous sections. 3. While the dataset itself is huge, there are only five different events (shown in Table 2). Therefore, it is unclear how representative the model's performance on this dataset will be. This is a significant limitation, especially due to the "Causal Parrots" phenomenon mentioned in the introduction.
Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How sensitive are the eval results w.r.t. the prompts? I'm a bit concerned that since "cause" and "effect" are not common words (especially their formal definition in causal inference), the model's ability may be underestimated with these prompts. 2. There are multiple different methods to estimate probability predictions from LLMs. Have you tried other methods, and would that change the empirical results significantly? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations are discussed in Sec. 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your detailed and insightful review. Please find our responses to the mentioned weaknesses/clarifications below: * **Lack of Human Performance:** We would like to mention that validating human performance is challenging due to the nature of the causal reasoning task. Counterfactual reasoning requires the human/algorithm to assume a similar alternate world/universe in which only a particular event happens or does not happen, in order to approximate the causal strength between the events. These imaginations can be expressed in statements, as highlighted by Pearl et al., containing an “if” statement in which the “if” portion is untrue or unrealized, which is known as a counterfactual. The “if” portion of a counterfactual is called the hypothetical condition or, more often, the antecedent, making it challenging (cognitively heavy) to conduct a human evaluation. We agree that human performance may be lower than complete perfection, and adding human performance in these tables would make the comparisons more informative. However, the true dependency of the events comes from the underlying causal graph, making the generated causal queries accurate. Previously, works like Jin et al. have released causal queries (https://huggingface.co/datasets/causalnlp/corr2cause, https://huggingface.co/datasets/causalnlp/CLadder) generated from an underlying causal graph, which require an understanding of causal inference theory and are therefore difficult to annotate purely by humans. * **Presentation and Organization of the paper:** We agree that the density of content in the paper is high and that the presentation can be improved by a significant margin. We agree that it would be better to keep the estimation of ATE as a separate section; however, due to limited space, we had to merge the experiments and results sections (Section 4, Experiments and Results).
We hope you understand that fitting all the details in the main paper is challenging; we would love to have your feedback on which particular sections would be better moved from the appendix to the main paper. We will make the suggested changes in the camera-ready version of the paper. * We agree that one of the limitations of our work is the limited set of activities; we also highlight this in the limitation section of our paper with suitable reasons [Lines 349-355]. We would like to reiterate that finding general commonsense reasoning activities/tasks that are well understood by humans remains challenging, and it would be good to explore adding more real-world activities in the future. Moreover, the difficulty of creating a causal graph for an activity increases as we move toward more long-term tasks. However, as a general test of causal intelligence, our framework provides a suitable platform to validate reasoning capabilities more rigorously, which is missing from the current literature. We hope that, with its enormous set of causal queries (coming close to simulating the mini-Turing test), the created dataset will serve as a robust testbench for evaluating causal reasoning in LLMs. **Response to Questions/Clarifications:** **Q1)** In general, LLMs are found to be sensitive to prompts, and the choice of prompt may affect performance. Various LLM benchmarking papers have previously performed evaluation over causal datasets like COPA and CausalBench. LLMs are few-shot learners (Brown et al., 2020), and ROCK has benchmarked LLMs using either cloze evaluation or MCQ evaluation. We consider the latter, which has also been justified for benchmarking commonsense reasoning activities in general. **Q2)** We agree that there are multiple ways of estimating probability predictions. Some of the widely accepted ones include cloze-test evaluation and MCQ-based evaluation.
As highlighted in the paper, we use multi-choice question-answering (MCQA) [Robinson and Wingate, 2023]. Robinson and Wingate [2023] highlight the advantages of MCQ-based evaluation over cloze evaluation [Brown et al., 2020] (where the LLM is expected to generate the entire answer), leading to a significant boost on various tasks, including commonsense-based tasks. We agree that there is a risk of bias in the probability estimates, which may result in variance in the prediction results. To mitigate this bias, we use averaging (similar to https://openreview.net/forum?id=shr9PXz7T0), which uses multiple option permutations to balance out the prediction evaluation. Algorithm 3 in the Appendix depicts the process of computing an unbiased estimate of the causal estimand. The causal strength is computed between two events $E_1$ and $E_2$, where $E_1$ is assumed to precede $E_2$ temporally. To make an unbiased estimate based on the provided options, we normalize the obtained probability scores by flipping the options and providing the same query prompt to the language model: $$ f_M(E_1,E_2,\phi) \leftarrow \frac{s_M(E_1,E_2,\phi)+s_M(E_1,E_2,\phi_f)}{s_M(E_1,E_2,\phi)+s_M(E_1,E_2,\phi_f)+\tilde{s}_M(E_1,E_2,\phi)+\tilde{s}_M(E_1,E_2,\phi_f)} $$ where $\phi$ denotes the prompt template shown in Figure 8 (top) and $\phi_f$ denotes the same prompt with flipped options, Figure 8 (bottom). The overall equation normalizes the prediction probabilities of the 'Increase' option using the probabilities of the 'Decrease' option. Finally, these normalized scores are computed over multiple trajectories $t_i$ in the backdoor adjustment scheme to obtain the causal estimands $p_\mathcal{M}(E_2 \mid do(E_1))$ and $p_\mathcal{M}(E_2 \mid do(\neg E_1))$, which estimate the causal strength $\Delta_{\mathcal{M}}$ between the events $E_1$ and $E_2$.
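As a minimal illustration of the flip-normalization above (variable and function names are hypothetical; the actual implementation in Algorithm 3 may differ):

```python
# Hypothetical sketch of the flip-normalization used to debias option-order
# effects. s_inc_* are the model's scores for the 'Increase' option and
# s_dec_* for the 'Decrease' option, under the original prompt (phi) and the
# option-flipped prompt (phi_f).

def normalized_score(s_inc_phi, s_inc_phi_f, s_dec_phi, s_dec_phi_f):
    """Normalize the 'Increase' probability using the 'Decrease' scores,
    averaging over the original and flipped prompts."""
    num = s_inc_phi + s_inc_phi_f
    den = num + s_dec_phi + s_dec_phi_f
    return num / den


def causal_strength(p_do_e1, p_do_not_e1):
    """Delta_M = p(E2 | do(E1)) - p(E2 | do(not E1)), where each term is an
    average of normalized scores over backdoor-adjustment trajectories."""
    return p_do_e1 - p_do_not_e1
```

Note that if the model assigns the same scores regardless of option order, the flipped terms leave the estimate unchanged; any order sensitivity is averaged out by the symmetric numerator and denominator.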
--- Rebuttal Comment 1.1: Comment: Dear Reviewer iBzV, Thanks again for helping review this paper! Since we are approaching the end of the author-reviewer discussion period, would you please check this author response regarding your concerns? We really appreciate it! Best, AC --- Rebuttal Comment 1.2: Comment: Thank you for the response. It addresses some of my concerns around prompt sensitivity and uncertainty estimation, hence I increased my score. I still believe adding human annotation is valuable for this work, but I also understand its difficulty now.
Rebuttal 1: Rebuttal: We thank all the reviewers for their detailed reviews and suggestions. We are pleased that the reviewers found our work **novel, helping bridge the gap between open-ended causal reasoning and symbolic representation-based question answering (Reviewer iBzV, Reviewer LejL)**. We are also happy that the design of the proposed framework for constructing the causal graph was found to be **strict and sound (Reviewer iBzV)**, laying **a solid foundation (Reviewer LejL)**. Moreover, in the analysis, the reviewers found **the proposed backdoor adjustment and the ATE construction unique and novel (Reviewer LejL, Reviewer SJyu)**, providing valuable insights into the causal strength between events (Reviewer LejL), and **the supporting experiments detailed and thorough (Reviewer G7bs, Reviewer SJyu)**. Finally, on the presentation quality, we are happy that the reviewer found the paper **“well-written, with clear explanations, logical flow, and concise language, ensuring effective communication of key points” (Reviewer LejL).** Some concerns were raised regarding the high density of the paper (Reviewer iBzV) and the ATE experiments being hard to comprehend on a first reading (Reviewer G7bs). We hope that incorporating the reviewers' suggestions will enhance the presentation quality of our work, making it easier for readers to comprehend on a first reading.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
A Boosting-Type Convergence Result for AdaBoost.MH with Factorized Multi-Class Classifiers
Accept (poster)
Summary: This paper studies the convergence rate of a variant of the boosting algorithm AdaBoost.MH, which is named factorized AdaBoost.MH. Factorized AdaBoost.MH replaces the one-against-all base classifier by the factorized base classifier which consists of a binary classifier and a vote vector. This paper solves an open problem proposed in COLT 2014 and provides a convergence rate for factorized AdaBoost.MH. Strengths: - This paper solves an open problem proposed in COLT 2014, which has been open for about 10 years. The proof is clear and the basic ideas behind the proofs are given. - This paper is well-written and easy to read. - The background is clearly introduced, which is friendly to new readers. Weaknesses: - A minor issue: part of the presentations are similar to that in [1]. [1] Balázs Kégl. Open problem: A (missing) boosting-type convergence result for adaboost.mh with factorized multi-class classifiers. In Maria-Florina Balcan, Vitaly Feldman, and Csaba Szepesvári, editors, COLT, volume 35, pages 1268–1275, 2014. Technical Quality: 4 Clarity: 3 Questions for Authors: - In equation (5) of [1], the vector $\mathbf{v}$ is not assumed to belong to $\\{ -1,+1 \\}^K$, while this paper makes such an assumption. Is the setting in your paper completely the same as that in [1]? [1] Balázs Kégl. Open problem: A (missing) boosting-type convergence result for adaboost.mh with factorized multi-class classifiers. In Maria-Florina Balcan, Vitaly Feldman, and Csaba Szepesvári, editors, COLT, volume 35, pages 1268–1275, 2014. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition. We are glad you like our paper. **For the weakness:** Thank you for your reminder. We use presentations similar to those in [1] in order to keep them consistent and make it convenient for readers who have read [1] before to understand our paper. We will carefully check the presentation of our paper and revise the unnecessary parts that are similar to [1]. **For the question:** In fact, the vector $\mathbf{v}$ in [1] is also assumed to belong to $\\{ -1,+1 \\}^K$. The authors of [1] forgot to mention this after equation (5). According to equation (12) in [1], we know that the vector $\mathbf{v}$ considered in [1] in fact belongs to $\\{ -1,+1 \\}^K$. ### References [1] Balázs Kégl. Open problem: A (missing) boosting-type convergence result for adaboost.mh with factorized multi-class classifiers. In Maria-Florina Balcan, Vitaly Feldman, and Csaba Szepesvári, editors, COLT, volume 35, pages 1268–1275, 2014. --- Rebuttal Comment 1.1: Comment: Thanks the authors for the effort in addressing my questions. I would like to keep my original recommendation. --- Reply to Comment 1.1.1: Comment: Thank you for your reply; we are glad that you are satisfied with our answers.
Summary: The paper addresses an open question posed by Kegl, 2014 regarding the convergence properties of AdaBoost.MH, a boosting algorithm designed for multi-class classification problems. Specifically, Kegl, 2014 noted that it is challenging to prove the convergence of this algorithm due to the weighted sum of binary classifications being less than one, thereby requiring a uniform lower bound on the weights. The authors of the current paper provide a solution to this open problem by establishing two convergence results: one dependent on the number of instances (n) and another independent of n, which is based on the number of classes (K). Strengths: 1. The authors derive two upper bounds for the algorithm's convergence: one that depends on the number of instances (n) and another that relies solely on the number of classes (K). While the first bound may be less relevant due to its dependence on n, which can grow arbitrarily large, the second bound offers a more useful guarantee of convergence. 2. The authors further demonstrate that the lower bound on w'_{\Sigma} (the weighted sum) can become arbitrarily small only in the limiting case where both the number of instances and classes approach infinity simultaneously. Weaknesses: 1.While the authors successfully address an open question in the field, I find it challenging to discern the significance of their findings beyond mere academic curiosity. As a consequence, it is unclear to me how this result contributes meaningfully to the broader machine learning community. To strengthen the case for this paper, I believe it would be beneficial for the authors to explicitly highlight the potential practical implications or applications of their convergence results, thus clarifying their relevance and value to the field. 2. Building upon my previous observation, I would like to emphasize the importance of empirical validation. 
While the theoretical convergence results are certainly valuable, their practical significance is amplified by experimental evidence demonstrating the correctness and effectiveness of these findings. To further strengthen the paper, I suggest that the authors include some concrete experimental results or simulations that illustrate the applicability of their proved bounds in real-world scenarios. 3. I found the writing style in the paper to be somewhat challenging to follow. The use of symbols like w'_{\Sigma} in the early sections can make it difficult for readers to quickly grasp the main ideas. Additionally, I noticed that Section 2.2 primarily recapitulates the Kegl, 2014 paper without adding significant value. Instead, I would suggest that the authors condense this information into a concise summary, allowing them to focus on presenting their own original contributions. By streamlining the writing and incorporating a clear summary of relevant background material, the authors can improve the overall readability and flow of the manuscript. Technical Quality: 3 Clarity: 1 Questions for Authors: 1. I see that the result of Remark 2 still has ln(2n(K-1)) in the convergence result of T^{*}. It is definitely much better than the result of Remark 1, but I am curious regarding why the authors call this particular result independent of n. Confidence: 4 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your suggestions. We now provide explanations of the issues raised in the weaknesses and questions. ### **For weaknesses.** **For the significance of our work.** Firstly, we note that the main contribution of our work is solving an open problem from [1] and providing a convergence rate for the factorized AdaBoost.MH algorithm, which Reviewers BMi6, jA8z, and HyZV consider meaningful. Since this paper is theoretical, we next explain the importance of boosting algorithms from the theoretical perspective. Boosting is a central part of statistical learning theory, and AdaBoost.MH is an important multiclass boosting algorithm. Boosting algorithms are basic but important tools in statistical learning theory; for example, in the PAC learning framework, many algorithms are constructed from boosting algorithms [2-4]. Furthermore, the convergence rate of a boosting algorithm usually affects the sample complexity of the resulting algorithm, i.e., the sample complexity of the constructed algorithms usually depends on the value $T^*$. So we believe our convergence rate can be helpful in the field of statistical learning theory. **Regarding the experimental validation.** As stated above, boosting algorithms are very useful in statistical learning theory, so our results are valuable, especially in applications to statistical learning theory. Since our paper is **purely theoretical**, we focus on the theoretical properties of the factorized AdaBoost.MH algorithm, and experiments are beyond the scope of this work. For results on the empirical effectiveness of the factorized AdaBoost.MH algorithm, please refer to [5]. **For the writing issues.** We apologize for the confusion. In the early sections, we use the undefined symbol $w_ {\Sigma}^\prime$ to introduce the main concerns of [1], which is indeed not ideal. 
In fact, the ultimate concern of [1] is a convergence rate for the factorized AdaBoost.MH algorithm, so we are glad to restate the concern of [1] as an upper bound on the convergence rate rather than a lower bound on $w_ {\Sigma}^\prime$; then the $w_ {\Sigma}^\prime$ term will not appear before its definition. Since our aim is to study the convergence rate of the factorized AdaBoost.MH algorithm, it is important to provide an overview of it, and we recapitulate [1] in Section 2.2 to give a complete introduction to the problem that we solve. Condensing this information into a concise summary is a good idea, and we are glad to do so in our revision. ### **For questions.** We apologize for the confusion. We refer to our second result as being independent of $n$ because we are talking about $w_ {\Sigma}^\prime$ (which is the focus of [1]) rather than $T^*$. In fact, the $n$ term in $T^*$ is unavoidable, since we need the upper bound on the exponential loss to be less than $\frac{1}{2n(K-1)}$ (as done in the proof of Corollary 3.5). We will clarify this to remove the ambiguity in our revision. ### **References** [1] Open problem: A (missing) boosting-type convergence result for adaboost.mh with factorized multi-class classifiers. [2] A theory of PAC learnability of partial concept classes. [3] A Characterization of Multiclass Learnability. [4] VC Classes are Adversarially Robustly Learnable, but Only Improperly. [5] The return of AdaBoost.MH: multi-class Hamming trees. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: I thank the authors for going through the review and responding to my questions. As pointed out, I am clearly the outlier in this reviewer group. I see that reviewer iRY2 also initially shared my main concern and the conversation there was very helpful. However, I feel that this significance needs to be properly stated in the manuscript itself, which has not been done in its current form. 
The explanation regarding experimental validation is defensible, but in my opinion some experiments would definitely elevate the quality of this paper. After going through all the reviews and discussions, I have decided to improve my scores slightly. --- Rebuttal 2: Title: Request for feedback Comment: Dear Reviewer, Thank you for reviewing our submission. We submitted our responses during the rebuttal period. Since it has been a few days since the beginning of the discussion period, we would sincerely appreciate feedback on whether our responses have addressed your concerns.
Summary: The paper solves a COLT 2014 open problem. The investigated problem concerns establishing a convergence rate for the factorized AdaBoost.MH algorithm, which is based on AdaBoost.MH and aims to boost weak classifiers into a strong classifier in the multiclass setting. The paper shows two lower bounds (one depending on the sample size $n$ and another depending on the class number $K$) for the important term $\omega_\Sigma$, which lead to two convergence rates for the factorized AdaBoost.MH algorithm. Strengths: **Originality.** The proofs of this paper are based on basic algebraic and probabilistic tools, so I believe the paper is original. **Quality.** This is a high-quality paper. It is well written and solves a COLT 2014 open problem. The proofs are sound and detailed, and the solution is elegant. **Clarity.** The writing of this paper is good. The definitions and notations are very clear, and the conditions of the theorems and corollaries are clearly stated. **Significance.** A long-standing open problem is beautifully solved by the paper. The paper provides a convergence rate for the efficient factorized AdaBoost.MH algorithm. So I think the paper is of great significance. Weaknesses: Firstly, the proof is somewhat easy compared with the solutions of other open problems. However, this may not be seen as a weakness, since complicated proofs are not necessarily better. Secondly, I think it would be better to provide some high-level intuition about the ideas used to solve the open problem. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses part. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: No. The theoretical results are only limited to the factorized AdaBoost.MH algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition. We are glad you like our paper. **For the first weakness:** As you say, complicated proofs are not necessarily better. We believe it is great to solve an open problem through a simple method. **For the second weakness:** Thank you for the reminder. The main idea of our proof is to take advantage of the properties of the terms $\mathbf{W, Y}$ and $\mathbf{v}$. The most intuitive idea is to use the property that the sum of the elements in $\mathbf{W}$ is $1$. So, in the proof, we try to obtain the sum of the elements in $\mathbf{W}$, or other terms related to this sum. - In Theorem 3.3, we utilize the relationship between norms to get a lower bound that involves the maximum row sum of $\mathbf{W}$. Fortunately, since the sum of $\mathbf{W}$ equals $1$, we know that the maximum row sum of $\mathbf{W}$ is not less than $\frac{1}{n}$. - In Theorem 3.4, we consider the average performance over different code vectors rather than the worst performance over all possible code vectors. We take $\mathbf{v}$ to be drawn from a uniform distribution on the binary cube $\\{ -1, +1 \\}^K$, and, fortunately, the average performance can be reduced to the sum of the elements in $\mathbf{W}$ via the Khintchine inequality and Jensen's inequality. --- Rebuttal Comment 1.1: Title: Keep the scores Comment: I have read the response letter. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We are glad you like our paper.
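The Khintchine averaging step described in the rebuttal above can be sanity-checked by exact enumeration rather than sampling. A minimal sketch, assuming arbitrary toy coefficients (not taken from the paper), verifying the $p = 1$ Khintchine lower bound $\mathbb{E}\lvert \sum_k \varepsilon_k a_k \rvert \ge \lVert a \rVert_2 / \sqrt{2}$ for Rademacher $\varepsilon_k$:

```python
import itertools
import math

def rademacher_mean_abs(a):
    """Exact E|sum_k eps_k * a_k| over all 2^K Rademacher sign vectors."""
    K = len(a)
    total = sum(
        abs(sum(s * x for s, x in zip(signs, a)))
        for signs in itertools.product((-1, 1), repeat=K)
    )
    return total / 2 ** K

# Toy coefficient vector; any real vector works.
a = [0.9, -0.4, 0.25, 1.3, -0.7]
lhs = rademacher_mean_abs(a)
rhs = math.sqrt(sum(x * x for x in a)) / math.sqrt(2)
# Khintchine inequality (p = 1, constant 1/sqrt(2)): E|.| >= ||a||_2 / sqrt(2)
assert lhs >= rhs
```

Exact enumeration is feasible here because the cube has only $2^K$ points; it avoids Monte Carlo noise entirely.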
Summary: This paper resolves an open problem pointed out by [7]. The open problem involves providing a lower bound, which is independent of $n$, for the coefficient $w'_{\Sigma}$ of the weighted multi-class exponential margin-based error in the factorized version of ADABOOST.MH. This result demonstrates that if the $\delta'$-weak learning condition is satisfied, the aforementioned error can be reduced to 0 by adding weak learners. The key points of the proof are as follows: * Reformulating the proposition to be proved as a lower bound evaluation of a minimax problem. * Treating the code $v$ as a Rademacher random variable, bounding the maximum value from below by the expectation, and then applying the Khintchine inequality. Strengths: The paper successfully resolves the open problem highlighted by [7]. Weaknesses: It seems the paper does not sufficiently explain the significance of resolving the open problem pointed out by [7]. Technical Quality: 3 Clarity: 3 Questions for Authors: * Is it possible to confirm that the $\delta'$-weak learning condition is satisfied for a real-world dataset? * If so, can we confirm how quickly the weighted multi-class exponential margin-based error decreases by numerical experiments? (Can we verify how tight $T^*$ is in practice by experiments?) Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: One limitation of this study appears to be the $\delta'$-weak learning condition. It would have been beneficial if the paper had provided examples or explanations to illustrate how strong this condition is for real-world datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your suggestions. We now provide explanations of the issues raised in the weaknesses and questions. ### **For the weakness: significance of the work.** The main contribution of our work is solving an open problem from [1] and providing a convergence rate for the factorized AdaBoost.MH algorithm, which Reviewers BMi6, jA8z, and HyZV consider meaningful. Boosting is a central part of statistical learning theory, and AdaBoost.MH is an important multiclass boosting algorithm. Boosting algorithms are basic but important tools in statistical learning theory; for example, in the PAC learning framework, many algorithms are constructed from boosting algorithms [2-4]. Furthermore, the **convergence rate** of a boosting algorithm usually affects the sample complexity of the resulting algorithm, i.e., the sample complexity of the constructed algorithms usually depends on the value $T^*$. So we believe our convergence rate can be helpful in the field of statistical learning theory. ### **For the questions.** Firstly, we should emphasize that the $\delta$-weak learning condition is the most basic and commonly used assumption in boosting [5]. From the above answers, we know that boosting algorithms are widely used in statistical learning theory. In such applications [2-4], one constructs a weak learner based on known algorithms and then applies boosting results. In such cases, the weak learners are easily shown to satisfy the $\delta$-weak learning condition. So, from the theoretical perspective, we can easily construct weak learners that satisfy the $\delta$-weak learning condition based on known learning algorithms, which means that **we do not need to worry about whether the $\delta$-weak learning condition holds when applying our convergence results to statistical learning theory problems**. 
To the best of our knowledge, there is no work focusing on validating the $\delta$-weak learning condition for real-world datasets. The validation of such a condition depends on many factors; for example, it depends on the hypothesis class chosen for the weak learner and on the examples that we sample. Suppose $\gamma(\mathbf{w}, \varphi)$ is the margin of $\varphi$ under weight vector $\mathbf{w}$, and let $\mathcal{H}$ be the hypothesis class used for the weak learner. By Definition 2.1, to validate the empirical $\delta$-weak learning condition, we need to calculate $\underset{\mathbf{w} \in \Delta^{m-1}}{\min} \underset{\varphi \in \mathcal{H}}{\max} \gamma(\mathbf{w}, \varphi)$ (the worst-case weighting against the best available hypothesis) and set it to be the value of $\delta$. Calculating $\delta$ thus involves a minimax problem over $\mathcal{H}$ and $\Delta^{m-1}$, which is possible in principle but generally computationally intractable. Validating the $\delta$-weak learning condition and confirming by numerical experiments how quickly the weighted multi-class exponential margin-based error decreases is an interesting topic. However, our paper is purely theoretical; we focus on the theoretical properties of the factorized AdaBoost.MH algorithm, so experiments are beyond the scope of this work. ### **References** [1] Open problem: A (missing) boosting-type convergence result for adaboost.mh with factorized multi-class classifiers. [2] A theory of PAC learnability of partial concept classes. [3] A Characterization of Multiclass Learnability. [4] VC Classes are Adversarially Robustly Learnable, but Only Improperly. [5] Boosting: Foundations and Algorithms. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Regarding the points I raised in my questions, I was considering the possibility that adding experimental evaluations could increase the value of this paper. 
As Reviewer g6ZT mentioned, I also believe that even for theoretical research, plotting the obtained upper and lower bounds for several examples can sometimes provide insights. However, I don't intend to lower my evaluation of this paper due to the lack of experimental evaluations, and your response is sufficient for me. Thank you also for your explanation about the importance of the research. I can certainly understand that boosting is a very important technique, but could you elaborate further on your thoughts about the positioning and importance of AdaBoost.MH within the boosting methods for multi-class classification? For example, [1] states that AdaBoost.MH was a state-of-the-art method as of 2014, which seems to have been one of the significances of solving this problem. (I'm not arguing that only state-of-the-art methods are important subjects for theoretical analysis. I'm just citing this as an example of one way to persuade readers of a problem's importance.) In contrast, this paper introduces AdaBoost.M1, AdaBoost.M2, AdaBoost.MH, and AdaBoost.MH with multi-class Hamming trees. However, although it mentions that AdaBoost.MH was improved by multi-class Hamming trees, there isn't much discussion about the performance of each method. Also, all the research mentioned in Related Works is by the same author. Are there no other studies applying boosting to multi-class classification problems? For instance, the AdaBoostClassifier in scikit-learn, a widely used machine learning library, adopts the method from [2], which I believe is not cited in this paper. Within these various boosting methods for multi-class classification, how is AdaBoost.MH positioned practically, theoretically, or historically, and why is it important? I think providing this kind of background could potentially appeal to a wider audience about the value of this important research. [1] B. 
Kégl, Open Problem: A (Missing) Boosting-type Convergence Result for AdaBoost.MH with Factorized Multi-class Classifiers, Proceedings of The 27th Conference on Learning Theory, PMLR 35:1268-1275, 2014. [2] J. Zhu, H. Zou, S. Rosset, T. Hastie, “Multi-class adaboost.” Statistics and its Interface 2.3 (2009): 349-360. --- Reply to Comment 1.1.1: Comment: Thank you for the reminder; we are glad to add more discussion of the importance of AdaBoost.MH in our revision. In binary classification, AdaBoost [1] is one of the most famous and influential of all binary boosting algorithms. The paper [1], which proposes AdaBoost, has more than 22000 citations. Since the proposal of AdaBoost, many works have tried to extend the boosting framework to multi-class classification problems. Most multi-class boosting algorithms have been restricted to reducing the multi-class classification problem to multiple two-class problems, among which the most famous and influential is AdaBoost.MH [2]. The paper [2], which proposes AdaBoost.MH, has more than 4900 citations. Moreover, AdaBoost.MH has inspired the proposal of many other multi-class boosting algorithms. For example, inspired by the way AdaBoost.MH reduces the multi-class classification problem to multiple two-class problems, [3] chooses another line of thought and develops an algorithm that directly extends AdaBoost to the multi-class case without reducing it to multiple two-class problems; [4] demonstrates how to improve the efficiency and effectiveness of AdaBoost.MH and proposes the algorithm LDA-AdaBoost.MH; [5] proposes an efficient multi-class fault diagnosis approach based on the AdaBoost.MH algorithm; [6] proposes a method for ranking based on AdaBoost.MH. There are also many other works based on AdaBoost.MH [7-9]. 
Furthermore, to the best of our knowledge, many works (for example, [3, 9-11]) use AdaBoost.MH as a baseline, which further shows the importance of AdaBoost.MH. For example, the only baseline used in [3] is AdaBoost.MH. In summary, AdaBoost.MH serves as a link between binary and multi-class boosting algorithms, is a cornerstone of multi-class boosting, and has had a significant influence on the multi-class boosting field. From the application perspective, AdaBoost.MH has influenced ranking [6], computer vision [10], natural language processing [4, 9], and multimedia [8]. Moreover, AdaBoost.MH is also implemented in the boosting package MultiBoost [12]. We are glad to add the above discussion in our revision. If you have further questions, please feel free to raise them and we will be glad to discuss them with you. ### **References** [1] A decision theoretic generalization of on-line learning and an application to boosting. [2] Improved Boosting Algorithms Using Confidence-rated Predictions. [3] Multi-class AdaBoost. [4] LDA-AdaBoost.MH: Accelerated AdaBoost.MH based on latent Dirichlet allocation for text categorization. [5] An Effective Fault Diagnosis Approach Based On Gentle AdaBoost and AdaBoost.MH. [6] Ranking by calibrated AdaBoost. [7] An improved boosting algorithm and its application to text categorization. [8] Generalized multiclass adaboost and its applications to multimedia classification. [9] MP-Boost: A Multiple-Pivot Boosting Algorithm and Its Application to Text Categorization. [10] Improved multiclass AdaBoost for image classification: The role of tree optimization. [11] A New Multiclass Generalization of AdaBoost. [12] MultiBoost: a multi-purpose boosting package.
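The minimax computation of $\delta$ discussed earlier in this thread (the worst weighting over the simplex against the best hypothesis in a finite class) can at least be brute-forced for toy problems. A minimal sketch with a hypothetical three-point dataset and three-hypothesis class (all values invented for illustration); by symmetry, the exact minimax value of this toy game is $\frac{1}{3}$:

```python
# Hypothetical tiny example: 3 training points with labels y, and a finite
# "weak" hypothesis class given by its +-1 predictions on those points.
y = [1, -1, 1]
phi_preds = [
    [1, 1, 1],    # phi_1
    [-1, -1, 1],  # phi_2
    [1, -1, -1],  # phi_3
]

def edge(w, preds):
    """Weighted edge gamma(w, phi) = sum_i w_i * y_i * phi(x_i)."""
    return sum(wi * yi * pi for wi, yi, pi in zip(w, y, preds))

# Brute force: min over a grid on the probability simplex of the max edge.
steps = 30
delta = min(
    max(edge((a / steps, b / steps, (steps - a - b) / steps), p)
        for p in phi_preds)
    for a in range(steps + 1)
    for b in range(steps + 1 - a)
)
print(delta)  # ~1/3: under the adversarial weighting every phi has edge 1/3
```

Even on this toy instance the grid has quadratically many points in the step count, and the cost grows exponentially with the number of training points, which illustrates why the computation is generally intractable.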
null
NeurIPS_2024_submissions_huggingface
2,024
Summary: Balázs presented a factorized version of ADABOOST.MH and empirically showed that this extension achieves promising results. Balázs further raised the convergence property of factorized ADABOOST.MH as an open problem. This submission addresses this open problem and presents an elegant proof. Strengths: This submission does a really good job of presenting the stories of ADABOOST, ADABOOST.MH, and the factorized version of ADABOOST.MH. The background is also clearly presented, which allows researchers unfamiliar with this topic to quickly grasp the main ideas of the paper. I actually enjoyed reading this paper. Technically, it is interesting to transform the open problem into a minimax problem, and both an n-independent lower bound and an n-dependent lower bound are provided in this paper. The Khintchine inequality plays the key role in the proof. The proof is simple and easy to follow. This paper is very significant because it solves a long-standing open problem. Weaknesses: I did not find any significant weakness in this paper. But I still have some questions, and I hope the authors can address them. 1. Why is there an absolute value in the equations between lines 217-218? 2. In Corollary 3.5, why can the exponential error be zero? Is this an asymptotic or non-asymptotic result? 3. What are the derivations between lines 232 and 233? Can you provide more details? 4. Why does the first equation below line 219 hold? Can you explain this? Technical Quality: 4 Clarity: 4 Questions for Authors: See above. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors answer NA in the Checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition. We are glad you like our paper. Next, we answer your questions. 1. **Why is there an absolute value in the equations between lines 217-218?** Answer: Because we are handling the $\ell_ 1$-norm of a vector, and the $\ell_ 1$-norm of a vector is the sum of the absolute values of its elements. 2. **In Corollary 3.5, why can the exponential error be zero? Is this an asymptotic or non-asymptotic result?** Answer: We apologize for the confusion; this is in fact a typo: we meant to say that the **Hamming loss** can be zero. The reason why the Hamming loss becomes zero after $T^*$ steps is shown in lines 237-238. It is a non-asymptotic result, since we explicitly provide the convergence rate in terms of $T, K, \delta^\prime$. 3. **What are the derivations between lines 232 and 233? Can you provide more details?** Answer: For the given $\mathbf{W,X,Y}$, by Theorem 3.4, there exists $\mathbf{v}^{\max}$ such that $w_ {\Sigma}^\prime \ge \frac{1}{\sqrt{2K}} > 0$. Since $w_ {\Sigma}^\prime > 0$, we know that $\gamma\left( \mathbf{v}^{\max} , \varphi^*, \mathbf{W} \right) = \sum_ {i=1}^n w_ i^\prime \cdot y_ i^\prime \cdot \varphi^*(\mathbf{x}_ i) = w_ \Sigma^\prime \cdot \sum_ {i=1}^n \frac{w_ i^\prime}{w_ \Sigma^\prime} \cdot y_ i^\prime \cdot \varphi^*(\mathbf{x}_ i) \ge w_ \Sigma^\prime \delta^\prime \ge \frac{\delta^\prime}{\sqrt{2K}}$. The key trick of this step is to construct weights $\frac{w_ i^\prime}{w_ \Sigma^\prime}$ that form a distribution; the construction depends on the fact that $w_ \Sigma^\prime > 0$. 4. **Why does the first equation below line 219 hold? Can you explain this?** Answer: In this line, we consider choosing $\mathbf{v}$ to be a Rademacher random vector with **independent** elements. By independence, we can simply replace the random variables $v_ 1, \dots, v_ K$ with $K$ independent Rademacher random variables $\varepsilon_ 1, \dots, \varepsilon_ K$. 
--- Rebuttal Comment 1.1: Title: response Comment: The author did a good job in answering my questions. I am happy to recommend the acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for your reply, we are glad that you are satisfied with our answers.
null
null
null
null
null
null
Error Correction Output Codes for Robust Neural Networks against Weight-errors: A Neural Tangent Kernel Point of View
Accept (poster)
Summary: The authors applied ECOCs to DNNs to bolster the model’s resilience to weight errors. Most significantly, they have derived a perturbation bound through the utilization of the neural tangent kernel. According to their mathematical analysis, some principles of designing the ECOCs were revealed, leading to the construction of ECOCs based on Hadamard codes or direct optimization. Experimental results indicate that the proposed method is effective. I have not thoroughly examined the proofs of the theorems, but the theorems themselves appear to be plausible based on my understanding. Strengths: + The perspective presented under the NTK is novel. + The mathematical analysis within this work is sound and offers valuable contributions to the academic community, particularly in the context of considering the one-hot code as a specialized form of ECOC. + The experimental results indicate that the proposed method exhibits a considerable performance advantage over existing approaches. + The manuscript is well written. Weaknesses: N/A Technical Quality: 3 Clarity: 4 Questions for Authors: + Is there a parallel line of research analyzing the model’s robustness with one-hot/softmax outputs through the lens of the NTK? + Line 223, "Although the optimal correlation matrices remain unindentified". The authors should consider adding a relevant citation or providing an explanation to substantiate this claim. Could there be combinatorial approaches for constructing better codes? + Section 3.2. The use of $\ell$ (\ell in math mode) is preferred over $l$, as it helps to distinguish it from the numeral $1$. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very thankful for your effort in providing constructive comments and supporting this work. Our responses to your comments are summarized below. ### **Response to questions** **Q1). Is there a parallel line of research analyzing the model’s robustness with one-hot/softmax outputs through the lens of the NTK?** A1). To the best of the authors' knowledge, this is the first work analyzing ECOC's effectiveness on the robustness of NNs against weight-errors in the NTK regime. Although the softmax activation makes the learning dynamics more complicated, the perturbation of the final pre-activation caused by weight-errors can still be described by Theorem 1. With some analysis of the softmax function, the results could be extended. We thank the reviewer for pointing it out. **Q2). Line 223, "Although the optimal correlation matrices remain unindentified". The authors should consider adding a relevant citation or providing an explanation to substantiate this claim. Could there be combinatorial approaches for constructing better codes?** A2). We apologize for the confusion. Our key point is that the generalization performance (of the NTK model) depends not only on the correlation matrix but also on the dataset (or the task). To identify the best correlation matrix, we need to know the data distribution (from which the dataset is sampled), which is often not available in real applications. The "combinatorial approaches" mentioned by the reviewer are actually a good point. While the search space is very large considering the code length and number of classes, one possible direction could be to employ mixed integer programming. We will explore these approaches in our future work. **Q3). Section 3.2. The use of $\ell$ (\ell in math mode) is preferred over $l$ as it helps to distinguish it from the numeral $1$** A3). We thank the reviewer for the suggestion. We will make the changes in the revised manuscript. 
--- Rebuttal Comment 1.1: Comment: Thank you, I will keep my score.
Summary: The paper deals with the use of error-correcting output codes (ECOCs) for the multi-class classification problem. The standard solution is to use the one-hot code. Existing literature shows that there are codes which are better than the one-hot code, but at the same time there is a lack of theory explaining this phenomenon. The current paper aims to fill this gap and provide theoretical results which help to choose a proper ECOC as well as provide guarantees. The authors utilize the neural tangent kernel (NTK) approach for their theoretical analysis. Strengths: The authors provide the results of a theoretical analysis - this is definitely the strong aspect of the paper. In particular, the main results are as follows: - the authors showed that replacing the one-hot code with another ECOC is equivalent to changing the decoding metric from Euclidean distance to Mahalanobis distance. - the authors established an upper bound for the perturbation of DNN outputs (in the case of weight noise), which helps to choose the minimal distance of the ECOC. - two ECOC constructions are proposed: an optimization-based construction and a construction based on the Hadamard matrix. - theoretical results are supported by numerical experiments. Weaknesses: I list the major weak points below: 1. The main contribution is repeated several times, namely in the Introduction and in Section 4. 2. Please provide an explicit problem statement and describe the conditions. As I understand, the main task is to deal with weight-error. 3. Please introduce the weight-error model explicitly. Now (in Section 4.1) it is described in words, but please define it formally. E.g., what does «weight-errors proportional to the weight scale» mean? Is the result w(1+z), where z ~ N(0, sigma^2), or is that not the case? 4. If you have the weight-error model, then in my opinion it would be better to train such a noisy NN. In this case the ECOC should be learned automatically. Could you please make such a comparison? 5. 
Corollary 1 seems to be trivial and to follow directly from coding theory. It says that (for guaranteed recovery) the norm of the errors should be less than or equal to d/2. 6. «Although it remains unclear which correlation matrices yield the best performance for ECOCs in absence of weight-errors, we know that codewords that are approximately orthogonal generally perform comparably to one-hot codes. Consequently, it is reasonable to regularize the orthogonality of the codewards during the ECOC construction». This is a strange statement for a theoretically oriented paper. Actually, the correlation is related to the minimum distance, namely $||a-b||^2 = ||a||^2 + ||b||^2 - 2 a^T b$ (codeword norms are usually fixed). 7. There are many ways to construct good codes over the {+-1} alphabet. Usually you should start with a good binary code (e.g. a Reed-Muller code) and map 0 to +1 and 1 to -1. There exist tables of good binary codes. Please try to use them and compare to your solutions. I believe your Method 2 is a particular case of this approach. 8. Experimental results demonstrate that the constructed codes are better than the one-hot code. At the same time, my question is as follows. Is the ECOC strategy itself beneficial? Could you please improve some state-of-the-art classifier with this approach? Technical Quality: 3 Clarity: 2 Questions for Authors: See Weaknesses Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: In my opinion, the limitation section should be expanded; please discuss the limitations of your method rather than «hardware acceleration». Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
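Regarding the distance-correlation identity raised in point 6 of the review above: on the $\{\pm 1\}$ hypercube it can be checked exhaustively. A minimal sketch (code length 4 chosen arbitrarily) confirming $\lVert a - b \rVert^2 = 2n - 2 a^T b$, i.e. that with fixed norms, minimizing correlation and maximizing squared distance are the same objective:

```python
import itertools

n = 4  # toy code length; codewords live in {-1, +1}^n

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b = 2n - 2 a.b on the hypercube,
# checked over every pair of length-n sign vectors.
for a in itertools.product((-1, 1), repeat=n):
    for b in itertools.product((-1, 1), repeat=n):
        assert sq_dist(a, b) == 2 * n - 2 * dot(a, b)
```

The check is exact integer arithmetic, so passing it for all $2^n \times 2^n$ pairs establishes the identity for this length outright.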
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We appreciate your feedback and provide our responses below: **A1).** We will rephrase in the revised manuscript. **A2).** Recall that $f(\cdot; \theta)$ is the NN with weights $\theta$. Let $\mathcal{A} (\mathcal{E}, f, \mathcal{D};u)$ be a training algorithm, which takes the ECOC $\mathcal{E}$, NN architecture $f$, and dataset $\mathcal{D}$ as input and outputs the trained weights. Here $u$ accounts for the randomness in the algorithm. Define the test error $\mathcal{T}(\theta, \mathcal{D}) = \sum_{(x,c)\in \mathcal{D}}\mathbf{1}[D(f(x;\theta)) \neq c]$, then the problem is $$\min_{\mathcal{E}} \mathsf{E}_{\Delta\theta} \mathsf{E}_u \mathcal{T}(\mathcal{A} (\mathcal{E}, f, \mathcal{D};u)+\Delta\theta, \mathcal{D})$$ where $\Delta\theta$ is the random weight-errors. **Conditions:** NTK assumptions. This statement is for clarification, and we translate it into a tractable form in eq. (12). **A3).** Not true. The perturbed weights are $\tilde{\theta} = \theta + \Delta\theta$, where the entries of the weight-errors $\Delta\theta$ are i.i.d. Gaussian with zero mean and variance $\frac{\bar{\sigma}^2}{n}$. We chose $\frac{\bar{\sigma}^2}{n}$ because: 1) on hardware devices, the noise is proportional to the scale of the weights; 2) in the NTK regime, the weights after training are of order $1/\sqrt{n}$, at the same scale as their initialization. **A4).** Noise-injection training (NIT) is an approach orthogonal to ECOC for combating weight-errors (see, e.g., [1]). However, it suffers from unstable training and long training time because each weight experiences variations independently in every training iteration. We compare the **accuracy** and **training time** of NIT and ECOC on the MLP & MNIST task below, where vanilla is the original NN with the one-hot code and NITs are noisily trained NNs with different standard deviations. 
Accuracy:

| method / noise level | 0 | 0.03 | 0.05 | 0.08 | 0.1 |
| :--- | :---: | :---: | :---: | :---: | :---: |
| vanilla | 98.44 | 88.67 | 35.46 | 11.61 | 10.80 |
| NIT 0.03 | 98.51 | 98.02 | 93.44 | 24.56 | 11.42 |
| NIT 0.05 | 98.16 | 97.85 | 96.66 | 73.25 | 23.50 |
| NIT 0.1 | 10 | 10 | 10 | 10 | 10 |
| ECOC (ours) | 98.51 | 97.04 | 84.33 | 42.47 | 26.54 |
| ECOC (ours) + NIT 0.05 | 98.14 | 98.03 | 97.33 | 88.60 | 66.51 |

Training time (on NVIDIA RTX A6000 GPU):

| Method | Training time |
| :--- | :---: |
| vanilla | 414s |
| NIT | 3355s |
| ECOC (ours) | 410s |
| ECOC (ours) + NIT | 3309s |

These results show that: 1) NIT improves robustness but significantly increases training time; 2) the training noise needs to be carefully picked when using NIT; 3) integrating ECOC and NIT yields the best result. **A5).** Corollary 1 bears resemblance to the aforementioned coding-theory result (i.e., half-distance greater than the error), but is **different** in several aspects: 1) The definition of the minimum distance is different in that ours involves the code length $n_L$ (see Eq. (10)). 2) The right-hand side (RHS) of Eq. (11) differs from coding theory: in addition to weight-errors, which correspond to the coding-theory result, the RHS of Eq. (11) accounts for the "confidence" of clean inference, because the clean inference may fall far from a codeword. 3) While the classic coding-theory result is deterministic, Corollary 1 is probabilistic. Hence, although Corollary 1 parallels the coding-theory result, it **does not follow directly from coding theory**. In fact, the derivation of Corollary 1 is quite involved and different from that of its counterpart in coding theory, so Corollary 1 is non-trivial. **A6).** We believe this stems from a misunderstanding of our theoretical results. We claim that a large codeword distance matters for the NN's robustness to weight-errors.
However, in terms of clean performance (in the absence of weight-errors), a good choice is to make the codewords orthogonal (based on the success of one-hot codes in the absence of weight-errors), which does not directly relate to the codeword distance. In fact, our ECOC construction is a trade-off between clean performance and robustness, accounting for the final performance of the NN under weight-errors. **A7).** Our Method 2, presented in Section 5.3, indeed constructs ECOCs from good codes such as the Hadamard codes suggested by the reviewer. Coincidentally, Method 2 uses the union of a Hadamard code and its complement, which is exactly the RM(1,N) code. Given a good binary code, the **challenge** is how to select a subset of its codewords to form an ECOC; randomly picking codewords without a methodology does not necessarily lead to optimal performance. We compare the performances of these codes on the MNIST/MLP task:

| method / noise level | 0 | 0.03 | 0.05 | 0.08 | 0.1 |
| :--- | :---: | :---: | :---: | :---: | :---: |
| Reed-Muller (n=128) | 98.51 | 96.81 | 81.52 | 39.27 | 24.98 |
| Hadamard (n=128) | 98.51 | 96.90 | 83.52 | 40.27 | 25.43 |
| Ours (n=128) | 98.51 | 97.04 | 84.33 | 42.47 | 26.54 |

Our proposed method performs better than ECOCs formed by randomly picking codewords from either the RM or the Hadamard code. **A8).** The ECOC strategy itself is beneficial only if the codewords are properly designed ([2] shows a case of ECOC with lower clean accuracy than one-hot). As an example, we provide results on ResNet-50/TinyImageNet suggesting that ECOC is beneficial. We believe follow-up research should further explore the potential of ECOC in NNs, which remains largely unexplored.

| method | accuracy |
| :--- | :---: |
| Vanilla | 48.74 |
| Ours | 57.64 |

[1] Y. Long et al. "Design of reliable DNN accelerator with un-reliable ReRAM." IEEE DATE, 2019. [2] A. Yu, et al. "COLA: orchestrating error coding and learning for robust neural network inference against hardware defects." ICML, 2023. --- Rebuttal Comment 1.1: Comment: Dear Reviewer Vn9W, Thank you for reviewing our work and providing feedback that helps us improve the quality of our paper. We have provided detailed responses to your feedback, including some of the experimental results you were curious to see. We hope you can take some time to look at our results. If you have any further questions or advice, please feel free to let us know. We kindly remind you that the deadline for the discussion period is tomorrow, August 13th AOE. Your response is highly appreciated. Thank you! Regards, Authors --- Reply to Comment 1.1.1: Title: Gentle Reminder: Please kindly review our rebuttal (From the authors) Comment: Dear Reviewer Vn9W, Thank you once again for your valuable feedback on our work. We wanted to kindly remind you that our rebuttal is now ready for your review. We fully appreciate your busy schedule and the additional effort this process requires. As the reviewer-author discussion period concludes in just a few hours, and the other two reviewers have responded positively to our rebuttal, your response and advice would be crucial to our work. We would greatly appreciate your prompt attention to this matter. Thank you very much. Best, Authors
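The construction described in A7 — the union of a Hadamard code and its complement, i.e., a first-order Reed-Muller code over the {+1, -1} alphabet — can be sketched with the standard Sylvester construction. A minimal illustration (a small code length stands in for the n=128 used in the table):

```python
import numpy as np

def sylvester_hadamard(n):
    """Sylvester construction: H_{2k} = [[H, H], [H, -H]]; n must be a power of 2."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 8  # small power of two for illustration
H = sylvester_hadamard(n)

# Union of the Hadamard rows and their negations: 2n codewords over {+1, -1}.
# Under the +-1 <-> 0/1 mapping, this set is the first-order Reed-Muller code RM(1, log2 n).
code = np.vstack([H, -H])

# Pairwise squared Euclidean distances between distinct codewords:
# 2n for orthogonal pairs, 4n for a row paired with its own negation.
G = code @ code.T
dists = {2 * n - 2 * int(G[i, j]) for i in range(2 * n) for j in range(2 * n) if i != j}
assert dists == {2 * n, 4 * n}
```

Selecting a C-codeword subset of these 2n words is then exactly the design problem A7 refers to.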
Summary: This paper studies the theoretical foundations of error-correcting output codes (ECOC) for multi-class classification by means of coding theory and the NTK. The NTK is employed to alter the decoding metric from the $l_2$ norm to the Mahalanobis norm, and based on that a few code-construction methods are proposed. Strengths: The NTK views the model as a Gaussian process, making it a powerful tool for analyzing neural networks' convergence and generalization. This paper derives an interesting connection between the decoding function in the infinite-width regime and the NTK formulation of neural networks. The derivation of the bounds is clear, and the codes are tested on different tasks and datasets, which is promising. Weaknesses: The presentation of the paper could be further improved. Please consider the following: - All parameters need to be defined in the main text. In (6), $\mathcal{E}([C])$ and in (8) $\bar{\sigma}$ are not defined. - There are sporadic grammatical errors such as "... then training DNNs in the NTK regime minimizing MSE will result ...." "... adopt a fully-connected feed-forward neural networks with L layers..." "... can reduce more generalization error...." "...each training uses a different randomly generated codes..." - In eq. (20), <= should be =. - I have seen in the NTK literature a regularized form of (4) as $(K(x,x)+\lambda I)^{-1}$; therefore there is no need to make Assumption 3. - The NTK is merely used in the proof of Theorem 1 and is not the main topic of this paper. I feel it is overemphasized in the title of the paper; please consider choosing a better title. Technical Quality: 3 Clarity: 3 Questions for Authors: - What is the reason you present your work in the context of the NTK? I see that you make a connection between ECOC and the NTK because both are in the infinite-width regime. However, you don't use the NTK to analyze the training dynamics of the networks.
- Given the sensitivity of your analysis to the width of the networks, I expected some figures demonstrating the effect of increasing the number of neurons on performance. Could you provide such a result to show in what regime your codes are optimal? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: NTK theory in general holds in the infinite-width limit. Further discussion is needed to adapt the methods in this paper to the finite regime. For instance, the empirical NTK has been developed in the literature to address the finite-width regime. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review and support our work. We appreciate your constructive feedback and provide our responses below: ### **Response to weakness 1** **Q1). All parameters need to be defined in the main text. In (6), $\mathcal{E}([C])$ and in (8) $\bar{\sigma}$ are not defined. -There are sporadic grammatical errors such as "... then training DNNs in the NTK regime minimizing MSE will result ...." "... adopt a fully-connected feed-forward neural networks with $L$ layers..." "... can reduce more generalization error...." "...each training uses a different randomly generated codes..."** **In eq (20), <= should be = .** A1) We apologize for the typos, grammatical errors, and missing definitions. We will fix all these issues. In Eq. (6), $\mathcal{E}([C])$ is an $n_L$-by-$C$ code matrix, where each column is a codeword $\mathcal{E}(i)$ for $1 \leq i \leq C$. In Eq. (8), $\bar{\sigma}$ is defined in Sec. 4.3.1 (weight-error model): we add noise with variance $\bar{\sigma}^2/n$ to the weights, where $n$ is an auxiliary variable defining the network width as specified in Sec. 3.2. Specifically, the $l$-th layer has a width of $\alpha_l n$ for a constant $\alpha_l$. The reasons behind this model are as follows: 1) On hardware devices, the noise is proportional to the scale of the weights (the signal-to-noise ratio (SNR) matters). 2) In the NTK regime, the weights after training are of order $1/\sqrt{n}$, the same scale as their initialization. For Eq. (20), we believe it should be $\leq$ because of Chebyshev's inequality. Notice that in the proof, we only claim $n$ is large enough rather than approaching infinity. If $n$ approached infinity, the first $\leq$ would become $=$ as suggested. ### **Response to weakness 2** **Q2). I see in the NTK papers a regularized form of (4) as $(\mathcal{K}(\mathcal{X},\mathcal{X})+\lambda I)^{-1}$, therefore no need to make assumption 3. -NTK is merely used in the proof of theorem 1 and is not the main topic of this paper.
I feel it is overemphasized in the title of the paper. Please consider choosing a better title.** A2) We appreciate your valuable suggestions. First, we agree that Assumption 3 can be removed if $l_2$ regularization of the weights is applied during model training. Second, regarding the title: since the focus of this manuscript is to establish the foundations of ECOC design within the context of neural networks, with the NTK used as a tool to analyze the behavior of neural networks trained with ECOC, we could modify the title to "On the Efficacy of Error Correction Output Codes for Robust Neural Networks Against Weight-Errors" per your suggestion, if the conference rules allow. ### **Response to Questions** **Q3). What is the reason you present your work in the context of NTK? I see that you make a connection between ECOC and NTK because both are in the infinite width regime. However you don't use NTK to analyze the training dynamics of the networks.** A3). The reason we present the work in the context of the NTK is as follows: existing research on ECOC's applicability to modern deep neural networks (DNNs) mainly focuses on directly adopting well-known error-correction codewords from the communication domain. It does not consider the unique information processing of DNNs, leading to suboptimal results. In our work, we want to provide guidance on ECOC design dedicated to DNNs by analyzing how DNNs behave after being trained with different codes. However, this is nontrivial, since the NN weight space is usually too complicated to analyze directly; we therefore leverage the NTK model to make the problem tractable for the first time. In this regard, our focus is not on the dynamics of the NTK itself, but rather on the consequences of training NNs (with ECOC) within the NTK regime.
More specifically, our work relates to the NTK in the following aspects: 1) In Proposition 1 (without weight-errors), we use the expression for NNs resulting from kernel ridge regression (the NTK dynamics). 2) Theorem 1 relies on assumptions and intermediate results within the NTK regime. **Q4). Given the sensitivity of your analysis to width of the networks, I expected some figures demonstrating the effect of increasing number of neurons on performance. Could you provide such a result to show in what regime your codes are optimal?** A4) We added experimental results on the MNIST dataset with a 2-hidden-layer MLP whose layer width is 512, 1024, 2048, or 4096. We construct the code using Method 1 with code length 128. The results are as follows:

| width \ noise scale | 0 | 0.03 | 0.05 | 0.08 | 0.1 |
| :--- | :---: | :---: | :---: | :---: | :---: |
| 512 | 98.43 | 89.75 | 59.82 | 26.86 | 18.67 |
| 1024 | 98.48 | 93.23 | 66.52 | 28.80 | 19.37 |
| 2048 | 98.50 | 95.48 | 73.40 | 32.30 | 20.99 |
| 4096 | 98.51 | 97.04 | 84.33 | 42.47 | 26.54 |

From the table we can observe that the robustness of NNs improves as the network width $n$ becomes larger. This is because the success rate of the bound derived in the manuscript increases as $n$ grows. ### **Response to limitations** In this work, only Proposition 1 is based on the infinite-width assumption. Both Theorem 1 and Corollary 1 only assume $n$ is large enough rather than infinite (our statements involve a slack variable $\delta$ in the bound and a bound success rate of the form $1-\dots$). We thank the reviewer for pointing out the empirical NTK. We leave refining Proposition 1 to the finite-width setting for future work. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns, I will keep my score.
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Error Analysis of Spherically Constrained Least Squares Reformulation in Solving the Stackelberg Prediction Game
Accept (poster)
Summary: The Stackelberg Prediction Game (SPG) has been widely studied, with its least squares loss variant (SPG-LS) gaining attention. The spherically constrained least squares (SCLS) method is the latest state-of-the-art method for solving the SPG-LS problem. The authors address the lack of theoretical error analysis for the SCLS method. By transforming the estimation error into an Auxiliary Optimization (AO) problem using the Convex Gaussian Min-Max Theorem (CGMT), the authors provide a theoretical error analysis, confirming the SCLS method's reliability. Experimental results validate the theorems. Strengths: The authors focus on the Spherically Constrained Least Squares (SCLS) method, which is the state-of-the-art method for solving the SPG-LS problem. The lack of theoretical error analysis of the SCLS method restricts its large-scale applications. The authors investigate the estimation error between the learner obtained by the SCLS method and the actual learner; therefore, the content of this paper is original. The writing quality is high, and the argumentation is clear. Moreover, this analysis strengthens the theoretical framework of the SCLS method and confirms the reliability of the learner produced by the SCLS method. Thus, the authors' contribution is significant in promoting large-scale applications of the SCLS method. Weaknesses: 1. The original paper of the SCLS method focuses on optimization (9), but the authors explore the SCLS problem (6). What are the differences between (6) and (9)? Why can the authors replace optimization (9) with (6)? 2. How do the authors derive optimization (32) from (31)? 3. There is a remark for Theorem 3.2; however, Theorem 3.1 lacks explanation. The authors should add explanatory content to Theorem 3.1 to improve its readability. 4. Theorem 3.1 appears very similar to Theorem 3.2. What's the relationship between Theorem 3.1 and Theorem 3.2? Technical Quality: 3 Clarity: 3 Questions for Authors: See the Weaknesses.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: In the Appendix, the limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Answer to Reviewer XvKF Dear Reviewer XvKF, Thank you for your work in reviewing our paper. We are very sorry for the inconvenience caused by our presentation. Following your comments, we have corrected our work in the revision. All the references appear in our main paper. In regards to the weaknesses: __Weakness 1.__ The original paper of the SCLS method focuses on optimization (9), but the authors explore the SCLS problem (6). What are the differences between (6) and (9)? Why can the authors replace optimization (9) with (6)? __Answer:__ Thank you for your comments. The SCLS problem (6) is equivalent to optimization (9): $$\begin{align*} \min _{\pmb{r}} q(\pmb{r}), \quad \text{s.t.}\quad \pmb{r}^\top\pmb{r} = 1,\tag{9} \end{align*}$$ where $q(\pmb{r}) = \Vert\hat{L}\pmb{r}-(\pmb{y}-\pmb{z}/2)\Vert^2$, $\hat{L} = \begin{pmatrix}\frac{\sqrt{\gamma}}{2}\pmb{X}&\frac{\pmb{z}}{2}\end{pmatrix}$ and $\pmb{r}=\begin{pmatrix} \tilde{\pmb{w}}\\ \tilde{\alpha} \end{pmatrix}$. Then $$ \begin{align*} q(\pmb{r})&= \Vert\hat{L}\pmb{r}-(\pmb{y}-\pmb{z}/2)\Vert^2= \Big\Vert\begin{pmatrix}\frac{\sqrt{\gamma}}{2}\pmb{X}&\frac{\pmb{z}}{2}\end{pmatrix}\begin{pmatrix} \tilde{\pmb{w}}\\ \tilde{\alpha} \end{pmatrix}-(\pmb{y}-\pmb{z}/2) \Big\Vert^2\\\\ &=\Big\Vert \frac{\tilde \alpha}{2} \pmb{z} + \frac{\sqrt{\gamma}}{2}\pmb{X}\tilde{\pmb{w}} - (\pmb{y} - \frac{\pmb{z}}{2})\Big\Vert^2. \end{align*} $$ Additionally, the constraint can be transformed to $$ \begin{align*} 1=\pmb{r}^\top\pmb{r}=\begin{pmatrix} \tilde{\pmb{w}}\\ \tilde{\alpha} \end{pmatrix}^\top\begin{pmatrix} \tilde{\pmb{w}}\\ \tilde{\alpha} \end{pmatrix}=\tilde{\pmb{w}}^\top\tilde{\pmb{w}} + \tilde \alpha^2.
\end{align*} $$ Therefore, optimization (9) is equivalent to the SCLS problem (6): $$\begin{align} \min\limits _{\tilde{\pmb{w}}, \tilde{\alpha}}\quad {\tilde v}(\tilde{\pmb{w}},\tilde \alpha) \triangleq \Big\Vert \frac{\tilde \alpha}{2} \pmb{z} + \frac{\sqrt{\gamma}}{2}\pmb{X}\tilde{\pmb{w}} - (\pmb{y} - \frac{\pmb{z}}{2})\Big\Vert^2,\quad {\text{s.t.} }\; \tilde{\pmb{w}}^\top\tilde{\pmb{w}} + \tilde \alpha^2 = 1. \tag{6} \end{align}$$ __Weakness 2.__ How do the authors derive optimization (32) from (31)? __Answer:__ Thank you for your comments. We are very sorry for the inconvenience caused by our presentation. The detailed derivation from (31) to (32) is as follows: $$\begin{align*} \min _{\tilde{\pmb{\beta}}}\max _{\eta \geq 0} \frac{1}{n}\Big(\sqrt{\Vert\tilde{\pmb{\beta}}\Vert^2+(\pmb{c}^\top\tilde{\pmb{\beta}})^2+\frac{4\sigma^2}{\gamma}}\cdot\Vert\pmb{g}\Vert \eta + \eta\pmb{h}^\top\tilde{\pmb{\beta}} -\frac{\eta^2}{4}\Big). \tag{31} \end{align*}$$ The objective of (31) is a concave quadratic function of $\eta$, maximized at its axis of symmetry: $$\begin{align*} \eta _s=2\Big(\sqrt{\Vert\tilde{\pmb{\beta}}\Vert^2+(\pmb{c}^\top\tilde{\pmb{\beta}})^2+\frac{4\sigma^2}{\gamma}}\cdot\Vert\pmb{g}\Vert+\pmb{h}^\top\tilde{\pmb{\beta}}\Big) >\Vert\tilde{\pmb{\beta}}\Vert(\Vert\pmb{g}\Vert-\Vert\pmb{h}\Vert). \end{align*}$$ Additionally, $\eta _s(\Vert\pmb{g}\Vert+\Vert\pmb{h}\Vert)>\Vert\tilde{\pmb{\beta}}\Vert(\Vert\pmb{g}\Vert^2-\Vert\pmb{h}\Vert^2)$. Here $\Vert\pmb{g}\Vert^2$ and $\Vert\pmb{h}\Vert^2$ concentrate around their means $n$ and $d$, respectively. Consequently, since $d/n<1$, the value around which $\eta _s$ concentrates is nonnegative.
Substituting $\eta _s$ into (31) yields: $$\begin{align*} &\min _{\tilde{\pmb{\beta}}} \frac{1}{n}\Big(\sqrt{\Vert\tilde{\pmb{\beta}}\Vert^2+(\pmb{c}^\top\tilde{\pmb{\beta}})^2+\frac{4\sigma^2}{\gamma}}\cdot\Vert\pmb{g}\Vert + \pmb{h}^\top\tilde{\pmb{\beta}} \Big )^2\\\\ =& \min _{\tilde{\pmb{\beta}}} \frac{1}{n}\Big[(\Vert\tilde{\pmb{\beta}}\Vert^2+(\pmb{c}^\top\tilde{\pmb{\beta}})^2+\frac{4\sigma^2}{\gamma})\Vert\pmb{g}\Vert^2+ (\pmb{h}^\top\tilde{\pmb{\beta}})^2+2\pmb{h}^\top\tilde{\pmb{\beta}}\Vert\pmb{g}\Vert \sqrt{\Vert\tilde{\pmb{\beta}}\Vert^2+(\pmb{c}^\top\tilde{\pmb{\beta}})^2+\frac{4\sigma^2}{\gamma}}\Big].\tag{32} \end{align*}$$ (Please refer to lines 221-228 of our main paper.) __Weakness 3.__ There is a remark for Theorem 3.2; however, Theorem 3.1 lacks explanation. The authors should add explanatory content to Theorem 3.1 to improve its readability. __Answer:__ Thank you for your suggestion. Following your recommendation, we will add the following remark for Theorem 3.1 in our revision to explain its content: Remark: Theorem 3.1 indicates that, as $n$ goes to $\infty$, the parameter vector $\tilde{\pmb{w}}^*$ learned through the SCLS method (6) reliably converges in probability to the actual parameter vector $\tilde{\pmb{w}} _0$. We can then utilize Theorem 2.4 to establish the estimation error of SPG-LS (1) solved by the SCLS (6). __Weakness 4.__ Theorem 3.1 appears very similar to Theorem 3.2. What's the relationship between Theorem 3.1 and Theorem 3.2? __Answer:__ Thank you for your comments. Theorem 3.1 indicates that $$\begin{align*} \lim _{n\to \infty}\Vert\tilde{\pmb{w}}^*-\tilde{\pmb{w}} _0\Vert \stackrel{P}{\longrightarrow} 0, \end{align*}$$ which corresponds to the estimation error analysis of the SCLS problem (6).
Theorem 3.2 demonstrates that $$\begin{align*} \lim _{n \to \infty}\Vert\pmb{w}^*-\pmb{w} _0\Vert \stackrel{P}{\longrightarrow} 0, \end{align*}$$ which corresponds to the estimation error of SPG-LS (1) recovered from the SCLS problem (6). Therefore, Theorem 3.1 is different from Theorem 3.2, and Theorem 3.1 can be used to derive Theorem 3.2. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the reply. They have addressed my questions well. I am pleased to keep my positive score. --- Reply to Comment 1.1.1: Title: Appreciation to Reviewer XvKF Comment: Thank you very much for your acknowledgment and efforts.
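The step from (31) to (32) in the Weakness 2 answer reduces to maximizing a concave quadratic $A\eta - \eta^2/4$, which peaks at $\eta_s = 2A$ with value $A^2$. A minimal numeric sketch (the dimensions and random draws are hypothetical stand-ins for the quantities in (31)):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, sigma, gamma = 5, 20, 0.3, 1.0
beta = rng.normal(size=d)       # plays the role of beta-tilde
c, h = rng.normal(size=d), rng.normal(size=d)
g = rng.normal(size=m)

# A = sqrt(||beta||^2 + (c^T beta)^2 + 4 sigma^2 / gamma) * ||g|| + h^T beta
A = np.sqrt(beta @ beta + (c @ beta) ** 2 + 4 * sigma ** 2 / gamma) * np.linalg.norm(g) + h @ beta

# The objective of (31) as a function of eta (dropping the common 1/n factor)
f = lambda eta: A * eta - eta ** 2 / 4

# Maximum at the symmetry axis eta_s = 2A, with value A^2 -- the objective of (32).
eta_s = 2 * A
assert abs(f(eta_s) - A ** 2) < 1e-9

# Grid check over eta >= 0 (eta_s concentrates on a nonnegative value when d/n < 1).
grid = np.linspace(0.0, 4 * abs(A) + 1, 100001)
assert f(eta_s) >= f(grid).max() - 1e-6
```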
Summary: The paper investigates the estimation error of the learner obtained through the SCLS method proposed in [3] in comparison to the actual learner. It redefines the estimation error of the SCLS method as a Primary Optimization (PO) problem and applies the Convex Gaussian min-max theorem (CGMT) to convert the PO problem into an Auxiliary Optimization (AO) problem. Subsequently, a theoretical error analysis for the SCLS method is presented based on this simplified AO problem. This analysis validates the accuracy of the strategy generated by the SCLS method from [3]. Strengths: The paper introduces a method to analyze the error of SCLS. And this method is applicable to other statistical learning algorithms. Weaknesses: 1. The reason for ``the lack of theoretical analysis on the error of the SCLS method limits its large-scale applications'' is unclear, which is very important to the motivation of this paper. Therefore, I failed to understand the importance of studying this problem, as it appears to be artificially created. 2. The notations in this article are somewhat ambiguous, hindering readability. For instance, the notations $d\in \mathbb{N}$ and $n=n(d)$ in Definition 2.5 (GMT admissible sequence) are identical to the notation for the data dimension $X\in \mathbb{R}^{n\times d}$. 3. The assumption ``The data $X$ is drawn i.i.d. from $N(0,1)$'' appears to be overly restrictive within the framework of the Stackelberg prediction game. 4. Section 2.1 of this paper is redundant as all the relevant information is extracted from reference [3]. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Line 61: What is the main distinction between $w_0$ and $w^*$? 2. Line 114: The notation ($\omega,~\alpha$) appears twice. 3. Line 155: The authors state that "a challenging PO problem can be replaced with a simplified AO problem." Could the authors provide more explanation on where the challenge in the PO problem lies and why the AO problem is simpler? 
Additionally, on page 7, line 213, why is PO (28) more complex than AO (29)? Where does the difficulty in (28) lie? 4. Line 161: $(u)$ should be changed to $f^*(u)$. 5. Line 170: The authors should explain why the second equation in (16) is valid. 6. Line 178: I am curious as to why the variables $\tilde{w_0}$ and $\tilde{\alpha_0}$ are not included as variables in the objective function of problem (21). 7. Line 202: As the authors state, "This equivalence allows for the substitution of the analysis of the optimal cost $\tilde{\beta}^*$ in SCLS (21) with the analysis of the optimal solution $\hat{\beta}^*$ in optimization (26)." We can see that (21) is nonconvex, while (26) is an unconstrained convex optimization. Does this mean the authors have equivalently reformulated a nonconvex optimization problem into a convex one? 8. It is reasonable to consider whether the limit $\lim_{n\rightarrow \infty}\frac{d}{n}$ falls within the interval $(0,1)$, since this assumption is not adopted in [3]. Furthermore, it is impractical to assume that $d/n < 1$, as the number of features may surpass the sample size. 9. In the experiments, why is the true parameter $w_0$ generated randomly? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer PFXk, Thank you for the detailed and thorough review. All the references can be seen in Author Rebuttal by Authors. ### For Weaknesses: __W1:__ The SCLS method [3] is the latest advanced technique for solving the widely applicable SPG-LS problem, winning the ICML 2022 Outstanding Paper Award. However, the SCLS method lacks error analysis, and we have filled this gap. Error analysis of algorithms is a very crucial and popular topic in the field of machine learning, as evidenced by several papers [B1-B8]. Therefore, it is important to study the error of the SCLS algorithm. We will clarify this sentence in the revision as follows: The paper [3] lacks theoretical analysis on the error of the SCLS method. __W2:__ In our paper, $d$ denotes the sample dimension and $n$ is the number of samples. Thus, $d$ in Definition 2.5 and $d$ in $X\in R^{n\times d}$ both represent the sample dimension, meanwhile, $n$ in Definition 2.5 and $n$ in $X\in R^{n\times d}$ both denote the number of samples, ensuring consistency and clarity. (See lines 31-33) __W3:__ The SCLS method is currently the state-of-the-art for solving SPG-LS, having won the ICML 2022 Outstanding Paper Award [3]. However, [3] lacks theoretical analysis on the error of the SCLS method. To the best of our knowledge, we are the first to investigate the error of the SCLS method. The primary contribution of our paper is to provide a theoretical perspective on the error of the SCLS method under Gaussian assumption. Our limitation section highlights this Gaussian assumption (See lines 372-375). Reviewer btrD also identifies this limitation but acknowledges our contribution under Gaussian settings. It is worth emphasizing that the Gaussian hypothesis is a commonly used approach for theoretical analysis of algorithms in machine learning, as evidenced by papers [C1~C8]. Thus, we investigate the error of the SCLS algorithm under Gaussian settings. 
__W4:__ Since our aim is to study the error of the SCLS method, it is important to provide an overview of this method. Section 2.1 provides the preliminaries, introducing the necessary concepts and theorems about the SCLS method from [3]. These concepts and theorems are frequently used and play a crucial role in the subsequent derivations of our manuscript. Following your advice, we will simplify Section 2.1 in our revision. ### For Questions: __Q1:__ $w_0$ is the known parameter of the SPG-LS model (1), while $w^*$, learned by the SCLS method, is an estimator of $w_0$. (See lines 61-66) __Q2:__ We will rectify it in the revision. __Q3:__ The PO problem contains the matrix $G\in R^{n\times d}$, and the challenge lies in processing matrices; the inclusion of a matrix in the PO problem increases the complexity of the analysis. In contrast, the AO problem only contains vectors of dimension $d$ or $n$, and vectors are easier to handle than matrices. Additionally, the AO problem reduces the dimension of the PO problem from $n\times d$ to $\max\{d, n\}$, thereby simplifying the PO problem. Based on this analysis, the reasons why PO (28) is more complex than AO (29), and the difficulties of (28), are summarized as follows: (i) The PO (28) contains the matrix $X\in R^{n\times d}$, and the challenge lies in processing matrices; the AO (29) only contains vectors of dimension $d$ or $n$, which are easier to handle. (ii) The AO (29) reduces the dimension of the PO (28) from $n\times d$ to $\max\{d, n\}$, thereby simplifying the PO (28). (iii) It is difficult to obtain the value on which the PO (28) concentrates. (iv) The AO problem (29) can be further simplified to the AO optimization (33), which only includes the estimation-error variable $\tilde{\beta}$ and is easier to analyze than PO (28). __Q4:__ We will rectify it in the revision.
__Q5:__ The SPG-LS model (1) can be expressed as: $$\min_ {w}\Vert X^*w-y\Vert^2,\quad \text{s.t.}\quad X^* =\arg\min_ {\hat X}\Vert\hat Xw-z\Vert^2+\gamma\Vert \hat X-X\Vert_ F^2.\tag{1}$$ [1, 2, 3] reformulate SPG-LS (1) into: $$\inf_ {w}\Big\Vert\frac{\frac{1}{\gamma}zw^Tw+Xw}{1+\frac{1}{\gamma}w^Tw}-y\Big\Vert^2,\tag{4}$$ which indicates that $$X^*w = \frac{\frac{1}{\gamma}zw^Tw+Xw}{1+\frac{1}{\gamma}w^Tw}.$$ Therefore, if $w=w _0$, we have $$y=X^*w _0+\epsilon=\frac{\frac{1}{\gamma}zw _0^Tw _0+Xw _0}{1+\frac{1}{\gamma}w _0^Tw _0}+\epsilon=\frac{\alpha _0z+Xw _0}{1+\alpha _0}+\epsilon,\tag{16}$$ where $\alpha_0=w_0^Tw_0/\gamma$. (See lines 92-95 and 169-170) __Q6:__ According to our answer to Q1, $w_0$ is assumed to be known. Since $\alpha_0=w_0^Tw_0/\gamma$, both $\tilde{w}_0$ and $\tilde{\alpha}_0$ are known constants, not variables. __Q7:__ Following the approach of [D1-D6], (26) is the first-order approximation of (21); the derivation process is in lines 183-199. We will clarify this point in our revision. __Q8:__ To the best of our knowledge, we are the first to investigate the error of the SCLS method, and the theoretical analysis in our paper successfully explains the behavior of the SCLS method when $d/n<1$. Thus, the primary contribution of our paper is to provide a theoretical perspective on the error of the SCLS method under certain conditions. Analyzing the error of the SCLS method under broader conditions is beyond the scope of this paper. We plan to investigate the error of the SCLS method for $d/n>1$ in future work. __Q9:__ According to our answer to Q1, $w_0$ represents the ``true'' weight parameter of the SPG-LS model, which is assumed to be known. Therefore, once selected, $w_0$ should be fixed in experiments. In addition to random generation, we can also manually set a constant value for $w_0$. We sincerely thank you once again for your time and effort in reviewing our paper. We hope that our answers have met your expectations.
--- Rebuttal Comment 1.1: Title: Response Comment: I would like to thank the authors for their responses. While I may not be fully familiar with the topic of error analysis, I remain uncertain about the sufficiency of the authors' contribution. However, after considering the feedback from other reviewers, I have decided to raise the score to 5. --- Reply to Comment 1.1.1: Title: Appreciation for Raising Score Comment: We appreciate your decision to raise our score. We will further emphasize our contribution in the revision. --- Rebuttal 2: Title: Kindly Requesting Confirmation on Responses Comment: Dear Reviewer PFXk, We hope this message finds you well. We are reaching out to kindly request your prompt response confirming whether our answers adequately address your queries. We sincerely thank you for your time and effort during this discussion period. Your timely feedback is greatly appreciated.
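The identity behind (4) and (16) in the Q5 answer above can be verified numerically. The rank-one closed form for the inner minimizer $X^*$ used below is our own derivation and an assumption, not a formula quoted from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, gamma = 12, 4, 2.0
X = rng.normal(size=(n, d))
w0 = rng.normal(size=d)
z = rng.normal(size=n)

# Assumed closed form of X* = argmin_{Xh} ||Xh w - z||^2 + gamma ||Xh - X||_F^2
# (a rank-one update of X; checked via the gradient condition below).
Xstar = X + np.outer(z - X @ w0, w0) / (gamma + w0 @ w0)

# Optimality: the gradient 2 (Xh w - z) w^T + 2 gamma (Xh - X) vanishes at X*.
grad = 2 * np.outer(Xstar @ w0 - z, w0) + 2 * gamma * (Xstar - X)
assert np.allclose(grad, 0)

# Reformulation used in (4)/(16): X* w = (z w^T w / gamma + X w) / (1 + w^T w / gamma).
alpha0 = w0 @ w0 / gamma
assert np.allclose(Xstar @ w0, (alpha0 * z + X @ w0) / (1 + alpha0))
```

Setting $w = w_0$ and adding noise then gives exactly the expression $y = (\alpha_0 z + X w_0)/(1+\alpha_0) + \epsilon$ in (16).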
Summary: The spherically constrained least squares (SCLS) reformulation method proposed by Jiali et al. has shown superior performance in addressing the Stackelberg prediction game. This paper aims to analyze the error between the estimators and the ground truth. The main theory shows that the estimation error approaches zero in probability. The empirical studies verify the claims made in the paper. Strengths: - This paper studies the seminal work on SPGs published in ICML 2022. The authors are the first to conduct an estimation-error analysis. - Technically, they first reformulate the estimation error of the SCLS method as a PO problem. Then, it is novel to use the CGMT to reduce the PO problem to the AO problem. Some derivations are performed to further simplify the AO problem, as shown in Eq. (33). They finally analyze the simple Eq. (33) and present two asymptotic main theorems. Empirical studies are also conducted to verify the theory. Overall, the presentation is clear and the paper appears to be theoretically solid. Weaknesses: - As pointed out by the authors in the checklist, the main weakness is that this paper assumes Gaussian inputs. This paper may motivate further research to remove or weaken the assumptions. - Eq. (13) and Eq. (14) miss the transpose symbol $T$. - Some derivations need to be clearer; for example, in Line 211, how can the authors derive Eq. (29)? - The asymptotic results of the CGMT rely on $d$, while Theorems 3.1 and 3.2 hold as $n\to\infty$. Could the authors say something about this? Technical Quality: 3 Clarity: 3 Questions for Authors: - How can the authors derive Eq. (29)? - Could the authors discuss the feasibility of replacing the $d$ required by the CGMT with $n$ in their conclusions? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Answer to Reviewer btrD Dear Reviewer btrD, Thank you for your work in reviewing our paper. We are very sorry for the inconvenience caused by our presentation. We extend our heartfelt gratitude for your patience and meticulous guidance. Your insightful comments are valuable to us and we appreciate the opportunity to address your concerns. ### In regards to the weaknesses: __Weakness 1.__ As pointed out by the authors in the checklist, the main weakness is that this paper assumes Gaussian inputs. This paper may motivate further research to remove or weaken the assumptions. __Answer:__ Thank you for your comments. To the best of our knowledge, we are the first to investigate the error of the SCLS method. The primary contribution of our paper is to provide a theoretical perspective on the error of the SCLS method under the Gaussian assumption. It is worth emphasizing that the Gaussian assumption is commonly used for theoretical analysis in machine learning, as evidenced by papers [A1~A8]. The error analysis of the SCLS method under general or weaker conditions is beyond the scope of this paper. We will investigate the results of non-Gaussian settings in our future work. [A1]. Alexander Camuto, Matthew Willetts, Umut Simsekli, Stephen J. Roberts, Chris C. Holmes: Explicit Regularisation in Gaussian Noise Injections. NeurIPS 2020 [A2]. Prathamesh Mayekar, Jonathan Scarlett, Vincent Y. F. Tan: Communication-Constrained Bandits under Additive Gaussian Noise. ICML 2023: 24236-24250 [A3]. Matthew Joseph, Alexander Yu: Some Constructions of Private, Efficient, and Optimal K-Norm and Elliptic Gaussian Noise. COLT 2024: 2723-2766 [A4]. Alexander Camuto, Xiaoyu Wang, Lingjiong Zhu, Chris C. Holmes, Mert Gürbüzbalaban, Umut Simsekli: Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections. ICML 2021: 1249-1260 [A5]. 
Holden Lee, Chirag Pabbaraju, Anish Prasad Sevekari, Andrej Risteski: Pitfalls of Gaussians as a noise distribution in NCE. ICLR 2023 [A6]. Christos Thrampoulidis, Ehsan Abbasi, Babak Hassibi: Precise Error Analysis of Regularized M-Estimators in High Dimensions. IEEE Trans. Inf. Theory 64(8): 5592-5628 (2018) [A7]. Yufeng Zhang, Jialu Pan, Li Ken Li, Wanwei Liu, Zhenbang Chen, Xinwang Liu, Ji Wang: On the Properties of Kullback-Leibler Divergence Between Multivariate Gaussian Distributions. NeurIPS 2023 [A8]. Seunghyuk Cho, Juyong Lee, Dongwoo Kim: Hyperbolic VAE via Latent Gaussian Distributions. NeurIPS 2023 __Weakness 2.__ Eq.(13) and Eq.(14) miss the transpose symbol $T$. __Answer:__ Thank you for your comments. We are very sorry for this clerical error and we will add the transpose symbol $T$ to rectify Equations (13) and (14) in our revision. __Weakness 3.__ Some derivations need to be more clear, for example, in Line 211, how can the author derive the Eq.(29)? __Answer:__ Thank you for your comments. $$ \Phi _{\text{SCLS}}(\pmb{X})= \min _{\tilde{\pmb{\beta}}} \max _{\pmb{u}}\frac{1}{n}\big( \pmb{u}^\top\pmb{X}\tilde{\pmb{\beta}}+\psi(\tilde{\pmb{\beta}}, \pmb{u})\big),\tag{28} $$ where $\psi(\tilde{\pmb{\beta}}, \pmb{u}):=\pmb{c}^\top\tilde{\pmb{\beta}}\cdot\pmb{u}^\top \pmb{z}-\frac{2\pmb{u}^\top\pmb{\epsilon}}{\sqrt{\gamma}}-\frac{\Vert\pmb{u}\Vert^2}{4}.$ Given that the entries of $\pmb{X}$ are drawn i.i.d. from $\mathcal{N}(0, 1)$ and $\psi(\tilde{\pmb{\beta}}, \pmb{u})$ is a convex-concave function, the $\textbf{PO}$ problem (28) satisfies the conditions of Theorem 2.6. 
Consequently, we replace the challenging $\textbf{PO}$ problem (28) with a simplified $\textbf{AO}$ problem using CGMT: $$\begin{align*} \phi _{\text{SCLS}}(\pmb{g},\pmb{h}) =& \min _{\tilde{\pmb{\beta}}} \max _{\pmb{u}}\frac{1}{n}\big(\Vert\tilde{\pmb{\beta}}\Vert\pmb{g}^\top\pmb{u} + \Vert\pmb{u}\Vert\pmb{h}^\top\tilde{\pmb{\beta}} +\pmb{c}^\top\tilde{\pmb{\beta}}\cdot\pmb{u}^\top \pmb{z}-\frac{2\pmb{u}^\top\pmb{\epsilon}}{\sqrt{\gamma}}-\frac{\Vert\pmb{u}\Vert^2}{4}\big)\\\\ =&\min _{\tilde{\pmb{\beta}}} \max _{\pmb{u}}\frac{1}{n}\big[(\Vert\tilde{\pmb{\beta}}\Vert\pmb{g}+\pmb{c}^\top\tilde{\pmb{\beta}}\pmb{z}-\frac{2\pmb{\epsilon}}{\sqrt{\gamma}})^\top\pmb{u} + \Vert\pmb{u}\Vert\pmb{h}^\top\tilde{\pmb{\beta}}-\frac{\Vert\pmb{u}\Vert^2}{4}\big],\tag{29} \end{align*}$$ (Please refer to lines 129-131 and 207-212 of our main paper.) __Weakness 4.__ The asymptotic results of CGMT rely on $d$, while Theorems 3.1 and 3.2 hold as $n \to \infty$. Could the authors say something about this? __Answer:__ Thank you for your comments. Definition 2.5 indicates that $n=n(d)$, where $n(d)$ is a function of $d$. Additionally, Theorems 3.1 and 3.2 require $\lim _{n\to\infty}\frac{d}{n}\in (0,1)$. Therefore, $\lim _{n\to \infty}$ implies $\lim _{d\to \infty}$. The transformation from $n$ to $d$ does not affect our theoretical results. It is worth emphasizing that, if we replace $n$ with $d$, all deductions should be based on $d$. ### In regards to your questions: __Question 1.__ How can the authors derive Eq.(29)? __Answer:__ Thank you for your comments. Question 1 is similar to Weakness 3. Please see the Answer to Weakness 3. __Question 2.__ Could the authors specify the feasibility of replacing $d$ required by CGMT with $n$ in their conclusions? __Answer:__ Thank you for your comments. Question 2 is similar to Weakness 4. Please see the Answer to Weakness 4. --- Rebuttal Comment 1.1: Comment: Thank you for the author's response. 
My issue has been resolved, and I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your acknowledgment and efforts.
Summary: The spherically constrained least squares reformulation (SCLS) method proposed in paper [3] is the state-of-the-art method for solving the Stackelberg prediction game with least squares loss (SPG-LS), and the paper [3] has won the ICML 2022 Outstanding Paper Award. This paper further enhances the theoretical framework of the SCLS method by providing a theoretical error analysis using the Convex Gaussian Min-Max Theorem (CGMT). The theoretical results indicate that the learner obtained through the SCLS method reliably converges to the actual learner vector of the Stackelberg prediction game. Additionally, the authors conduct experiments to validate their theorems, with the experimental results aligning with their theoretical predictions. [3] Jiali Wang, Wen Huang, Rujun Jiang, Xudong Li, and Alex L. Wang. Solving Stackelberg prediction game with least squares loss via spherically constrained least squares reformulation. In ICML, volume 162, pages 22665–22679, 2022. Strengths: $\bullet$ __Originality__ 1. This paper provides a theoretical error analysis of the SCLS method proposed by the paper `` Solving Stackelberg prediction game with least squares loss via spherically constrained least squares reformulation ’’, which won the ICML 2022 Outstanding Paper Award. 2. This paper introduces the transformation from the Primary Optimization (PO) problem to the Auxiliary Optimization (AO) problem. 3. This paper applies the Convex Gaussian min-max theorem. $\bullet$ __Quality__ The quality of this paper is good. $\bullet$ __Clarity__ The overall expression and proof process of this paper are very clear. $\bullet$ __Significance__ The theoretical results further ensure the effectiveness and accuracy of the SCLS method proposed by [3]. This topic is interesting and important. Weaknesses: $\bullet$ This paper transforms a PO problem related to the error of the SCLS method into a simplified AO problem, but the motivation for this transformation seems weak. 
More specific reasons for this transformation are needed. $\bullet$ According to Theorem 2.6, the equation between lines 154-155 should be a result of asymptotic convergence. However, the condition ``$n \to \infty$’’ is missing in the description. $\bullet$ According to (2), the authors translate the findings of the approximated SCLS problem to that of the original SCLS problem if $\Vert\tilde{w}-\tilde{w}_0\Vert\to 0$ happens independently. Where is it shown that this condition occurs? Technical Quality: 4 Clarity: 3 Questions for Authors: $\bullet$ None. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: $\bullet$ Yes, the authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Answer to Reviewer rMtE Dear Reviewer rMtE, Thank you for your work in reviewing our paper. We are very sorry for the inconvenience caused by our presentation. To this end, following your comments, we have corrected our work in the revision. In regards to the weaknesses: __Weakness 1.__ This paper transforms a PO problem related to the error of the SCLS method into a simplified AO problem, but the motivation for this transformation seems weak. More specific reasons for this transformation are needed. __Answer:__ Thank you for your comments. Theorem 2.6 indicates that, if the optimal cost $\phi(\pmb{g},\pmb{h})$ of $\textbf{AO}$ concentrates to some value $\mu$, the same holds true for $\Phi(\pmb{G})$ of $\textbf{PO}$. Furthermore, under appropriate additional assumptions, the optimal solutions of the $\textbf{AO}$ and $\textbf{PO}$ problems are also closely related by $\Vert\pmb{\beta} _{\Phi}(\pmb{G})\Vert = \Vert \pmb{\beta} _{\phi}(\pmb{g},\pmb{h})\Vert$, as $n\to \infty$. This suggests that, within the CGMT framework, a challenging $\textbf{PO}$ problem can be replaced with a simplified $\textbf{AO}$ problem, from which the optimal solution of the $\textbf{PO}$ problem can be accurately inferred. Moreover, we are primarily concerned with $\Vert \pmb{\beta} _{\phi}(\pmb{g},\pmb{h})\Vert$. Therefore, if we reduce the $\textbf{AO}$ problem to one that involves only the scalar variable $\Vert \pmb{\beta} _{\phi}(\pmb{g},\pmb{h})\Vert$, we obtain the error of the SCLS method by the relationship $\Vert\pmb{\beta} _{\Phi}(\pmb{G})\Vert = \Vert \pmb{\beta} _{\phi}(\pmb{g},\pmb{h})\Vert$. If the optimal solution of optimization (33) is $\Vert\tilde{\pmb{\beta}}\Vert=\rho^*$, we have $\Vert\tilde{\pmb{\beta}} _{\phi _{\text{SCLS}}}\Vert\stackrel{P}{\longrightarrow}\rho^*$ for $\textbf{AO}$ problem (29). Then, by virtue of CGMT, $\Vert\tilde{\pmb{\beta}} _{\Phi _{\text{SCLS}}}\Vert\stackrel{P}{\longrightarrow}\rho^*$ also holds for $\textbf{PO}$ problem (28). 
If $\rho^*$ further satisfies $\rho^* = 0$, based on the relationship between the original and approximated SCLS in Section 3.1, we have $\Vert\tilde{\pmb{w}}-\tilde{\pmb{w}} _0\Vert\stackrel{P}{\longrightarrow} 0$ for SCLS problems (21) and (6). Therefore, it only remains to obtain the optimal value of $\rho$ in optimization (33) that plays the role of $\Vert\tilde{\pmb{\beta}}\Vert$. (Please refer to lines 152-157 and 236-241 of our main paper.) __Weakness 2.__ According to Theorem 2.6, the equation between lines 154-155 should be a result of asymptotic convergence. However, the condition ``$n \to \infty$’’ is missing in the description. __Answer:__ Thank you for your comments. The equation between lines 154-155 is a further explanation of Theorem 2.6 and the condition ``$n \to \infty$’’ is included in Theorem 2.6. __Weakness 3.__ According to (2), the authors translate the findings of the approximated SCLS problem to that of the original SCLS problem if $\Vert\tilde{w}-\tilde{w} _0\Vert\to 0$ happens independently. Which place reflects that this condition will occur? __Answer:__ Thank you for your comments. Our Theorem 3.1 reflects that this condition happens independently. Specifically, Theorem 3.1 demonstrates that $\lim _{n\to \infty}\Vert\tilde{\pmb{w}}^*-\tilde{\pmb{w}} _0\Vert \stackrel{P}{\longrightarrow} 0$ for the approximated SCLS problem. Then, according to the relationship (25), we can translate the findings of the approximated SCLS problem to that of the original SCLS problem. --- Rebuttal Comment 1.1: Title: response Comment: The author did a good job in answering my questions. I am happy to recommend the acceptance. --- Reply to Comment 1.1.1: Title: Appreciation to Reviewer rMtE Comment: We deeply appreciate your acknowledgment, praise, and efforts.
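The AO-to-PO transfer argument running through this rebuttal can be summarized in one line, using the rebuttal's own notation (this is a restatement of the consequence of Theorem 2.6 as described above, not a new result):

$$\phi _{\text{SCLS}}(\pmb{g},\pmb{h})\stackrel{P}{\longrightarrow}\mu \;\Longrightarrow\; \Phi _{\text{SCLS}}(\pmb{X})\stackrel{P}{\longrightarrow}\mu, \qquad \Vert\tilde{\pmb{\beta}} _{\phi _{\text{SCLS}}}\Vert\stackrel{P}{\longrightarrow}\rho^* \;\Longrightarrow\; \Vert\tilde{\pmb{\beta}} _{\Phi _{\text{SCLS}}}\Vert\stackrel{P}{\longrightarrow}\rho^*,$$

and when $\rho^*=0$, the relationship between the approximated and original SCLS problems yields $\Vert\tilde{\pmb{w}}-\tilde{\pmb{w}} _0\Vert\stackrel{P}{\longrightarrow} 0$.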
Rebuttal 1: Rebuttal: ## The references mentioned in the Rebuttal to Reviewer PFXk. __For Weakness 1.__ [B1] Estimating the Error of Randomized Newton Methods: A Bootstrap Approach. ICML 2020 [B2] $\ell _{1, p}$-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods. ICML 2015 [B3] Addressing Function Approximation Error in Actor-Critic Methods. ICML 2018 [B4] Faster Algorithms and Constant Lower Bounds for the Worst-Case Expected Error. NeurIPS 2021 [B5] A Comparison of Hamming Errors of Representative Variable Selection Methods. ICLR 2022 [B6] On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning. ICLR 2020 [B7] Generalization error of spectral algorithms. ICLR 2024 [B8] Error Estimation for Randomized Least-Squares Algorithms via the Bootstrap. ICML 2018 __For Weakness 3.__ [C1] Near-Optimal Algorithms for Gaussians with Huber Contamination: Mean Estimation and Linear Regression. NeurIPS 2023 [C2] Differentially Private Algorithms for Learning Mixtures of Separated Gaussians. NeurIPS 2019: 168-180 [C3] Some Constructions of Private, Efficient, and Optimal K-Norm and Elliptic Gaussian Noise. COLT 2024 [C4] Convergence of the EM Algorithm for Gaussian Mixtures with Unbalanced Mixing Coefficients. ICML 2012 [C5] Sparse Gaussian Conditional Random Fields: Algorithms, Theory, and Application to Energy Forecasting. ICML 2013 [C6] Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed. ICML 2021: 8936-8947 [C7] On the Properties of Kullback-Leibler Divergence Between Multivariate Gaussian Distributions. NeurIPS 2023 [C8] Precise Error Analysis of Regularized M-Estimators in High Dimensions. IEEE Trans. Inf. Theory, 2018 __For Question 7.__ [D1] Precise error analysis of regularized m-estimators in high dimensions. IEEE Trans. Inf. Theory. 2018. [D2] The Noise-Sensitivity Phase Transition in Compressed Sensing. IEEE Trans. Inf. Theory. 
2011 [D3] Positivity-preserving entropy stable schemes for the 1-D compressible Navier-Stokes equations: First-order approximation. J. Comput. Phys. 2022 [D4]. On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation. ICLR 2024 [D5] Tracking MPC Tuning in Continuous Time: A First-Order Approximation of Economic MPC. IEEE Control. Syst. Lett. 2023 [D6] Sharp MSE Bounds for Proximal Denoising. Found. Comput. Math. 2016
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
CALVIN: Improved Contextual Video Captioning via Instruction Tuning
Accept (poster)
Summary: The paper focuses on captioning movie scenes; unlike typical captioning tasks, videos from movies can be captioned in a way that tells the story of the movie. For example, captions from movie scenes should convey how the characters felt, and not just the detail in the image. To be able to caption in this way, the authors propose and train CALVIN, a 7B LVLM, on MAD (a movie-audio dataset) as well as other datasets. They then experiment with a variety of techniques to evaluate CALVIN in various settings. Strengths: * The core contribution of this paper is a large model to caption movie scenes, which is meaningfully different from captioning typical videos. The weights of the model can contribute to further study this difference, or further explore movie captioning. * Big improvements over existing work * There are lots of experiments and ablations, testing the model in prompting, test-time setups, and even few-shot finetuning (i.e., personalization toward a specific movie) Weaknesses: * I am not sure I can say there’s technical novelty in this work (this is not a deal breaker, since existing knowledge was applied to create a research artifact that did not exist prior). However, I do not see a statement about releasing weights (only code), and if it will not be released, I am not sure what the contribution of the paper is. Technical Quality: 3 Clarity: 3 Questions for Authors: What do you do with frames that introduce new information specific to the movie? For example, how could a model correctly predict the ground truth for frames that introduce names of characters? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: I do not see where in the paper core limitations of CALVIN or the methodology are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback, GsDL. We're happy to answer your questions regarding our paper's key contributions, model weight release, and how to adapt the model to new movies. **[Main Contribution of the paper]** While the model weights are important (and we are working to release them), our work also includes a number of key contributions concerning architectural choices for video LLM, model training, and dataset choice and setup. On the architectural side, we show that we can simplify/unify complicated input pipelines from prior work with a single Q-former, and we highlight the importance of continued training in the LLM part of the model. We believe this can help video-understanding researchers make calls while training the next-gen models. For example, we are one of the few video-understanding papers that calls out the importance of data cleaning and data mixtures which is also echoed by recent works such as MM-1 [1], and PaliGemma [2]. We further introduce novel few-shot adaptations of the model for new movies which can make the model useful in practice and take it beyond the academic realm. **[Weights release]** This issue is currently under legal review. As a backup option, we are also currently working with a partner to release a non-official version of the weights for the camera-ready. **[How to predict new information specific to the movie?]** Great question. This is exactly what we tried to answer in Section 5. In our experiments, we found CALVIN was predicting character emotions and physical states well but it is hallucinating the names and locations when we do not provide the context on a new movie. Hence in section 5.2, we few-shot finetune on some scenes with main characters and ground-truth ADs, this led to non-trivial improvement in the performance even without context. We believe this could be a way to adapt CALVIN for new movie audio descriptions. [1] - McKinzie, Brandon, et al. 
"Mm1: Methods, analysis & insights from multimodal llm pre-training." arXiv preprint arXiv:2403.09611 (2024). [2] - Beyer, Lucas, et al. "PaliGemma: A versatile 3B VLM for transfer." arXiv preprint arXiv:2407.07726 (2024). ------- Thank you again for your thoughtful review. We hope we addressed your concerns/questions sufficiently and we would appreciate it if you would consider raising your score in light of our response. Please let us know if you have additional questions we can address. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for the response. **Main Contribution of the paper** Thank you for clarifying. I do maintain my belief, however, that the implementation of CALVIN is not novel. Careful data cleaning (and data mixtures) is not an issue specific to video and there are countless papers that study its importance in adjacent fields, such as LLM pretraining. To the best of my understanding, the core contribution of this paper is not to study the importance of quality data in training, nor does it contribute tools to clean data specific to video, so I do not see how this furthers the field of video understanding on the axis of data cleaning. Similarly, if the main point of the paper is to show a single Q-former is enough to replace a complex pipeline, then I do not believe the text reflects that. For example, the abstract does not indicate any type of novelty on the training method, but it does emphasize the utility of CALVIN. Regarding the few-shot adaptations, I agree they are useful, but this is not data cleaning or related to the pretraining of the model (including the dreambooth-like setup). **Weights Release** I am glad to hear that; I do believe it has a non-trivial impact on the value of this work. **How to predict new information specific to the movie?** Thanks, this does answer my question. For the reasons above, I choose to maintain my score. 
--- Reply to Comment 1.1.1: Title: Thank you Reviewer GsDL Comment: We thank the reviewer for your thoughtful feedback. We note your concerns and we will update the abstract and introduction to highlight a bit more on architectural and training innovations (LLM continued training and Fewshot fine-tuning) we introduce in the paper. Please let us know if you have any additional questions we can address.
Summary: This paper introduces a specialized video LLM named CALVIN, which leverages previous movie context to generate contextual scene descriptions. The authors achieve this by training their model on both image QA tasks and video captioning tasks within a unified framework. Experiments demonstrate that with mixed training data and context during inference, CALVIN outperforms the previous state-of-the-art in the audio description task. Strengths: 1. The paper is well-written and easy to follow. 2. This paper explores the use of video LLM in audio description, demonstrating significant performance gains over the previous SOTA. 3. The paper presents detailed ablation studies, which are helpful in understanding the effectiveness of CALVIN. Weaknesses: 1. The method appears similar to VideoLLaMA except for differences in data usage and the base LLM. The authors should provide a more thorough discussion on how their approach differs from previous methods. 2. The experimental results in Table 1 and Table 2 are not entirely fair due to the use of different training data and pre-trained models. The data seems critical for CALVIN's performance, as shown in Table 3. Therefore, it would be better to align data usage with the compared methods or at least highlight the types of data used by other models. 3. Is it practical to use context from captions, given that ground-truth captions do not exist? What is the performance of using generated captions as context for CALVIN? Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitation of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback, Y2oh. We're happy to answer your questions regarding a better description of the architectural differences between CALVIN and VideoLLaMA, and regarding further dataset ablations. **[Clarification of Differences to VideoLLaMA and other recent video architectures]** The first main difference between CALVIN and other models is how the video representations are projected onto the Language model space. While previous methods such as VideoLLaMA used two projection Q-formers (with one component frozen and another trained from scratch) in sequence, we simplified the pipeline with a single Q-former that is trained from scratch. The second main difference is the LLM finetuning and the usage of LoRA. Our experiments showed that unfreezing the LLM in stage 2 can lead to significant performance gains, while previous methods such as VideoLlama had that component frozen throughout the training. Many recent works such as MM-1 [1], Cambrian-1 [2], and PaliGemma [3] have shown that while some critical architectural choices such as positional embeddings or certain activations could be important, most of the gains came from the data mixtures. Compared to previous works, we are also the first ones to propose the use of image VQA data in training and the conversion of caption data into synthetic VQA via LLMs and to show that adding these into training can improve performance. Overall, thank you for bringing up this question. We will add this discussion to our related work section. **[Dataset comparisons]** We summarize the types of data used in the last stage of training of important baselines in the following table.

Model | Datasets used
-------- | -------
VideoLlama | Webvid-2M, CC595k
Auto-AD I | CC3M, Webvid-2M, AudioVault-AD (3.3M), MAD
CALVIN | CC3M, Webvid-2M, Llava (600k), MAD

While the scale of data used in the training of all the models is almost the same, how the data is used varies across the models. 
We shared our data learnings in the paper (Appendix A), showing that cleaning the data or removing certain types of data can improve performance significantly. We also modified the data into instruction-tuning format and this led to gains. We also thoroughly ablated different types of training setups and shared our learnings in our paper. We believe that while our architecture and data differ slightly from the baselines, they are of similar scale, and our careful data curation and thorough studies led to the best freeze-train configuration for the final model. With even more compute, we would have liked to run additional ablations of the other models with our data, but this is not something we could budget in. In any case, we've included the exact data mix for all models in Table 1 and Table 2 with an additional column, to clarify to readers that these models do differ in their pre-training data. **[How to do contextual captioning when ground truth context doesn’t exist?]** Excellent question. This is quite a plausible scenario and this is the inspiration for Section 5. We saw a drop in performance when we used generated captions in context; however, the performance is still better than the baseline model as well as Calvin with no context. We present the contextual captioning with self-generated ground truth in the following table. (We only presented Cider since that is the only metric available in the Auto-AD paper for this scenario.)

Model | Context length | Cider
-------- | ------- | -------
Auto-AD I | 3 | 14.5
Auto-AD 2 | 3 | 19.5
Calvin | 3 | 19.9

We observed that the main reason for this performance drop is that many ADs contain either the first names of characters or locations, and without this knowledge in context, it is hard for models to predict correct names, and they sometimes hallucinate. Please refer to Section 5 to see our proposals to get around this issue with few-shot finetuning. [1] - McKinzie, Brandon, et al. 
"Mm1: Methods, analysis & insights from multimodal llm pre-training." arXiv preprint arXiv:2403.09611 (2024). [2] - Tong, Shengbang, et al. "Cambrian-1: A fully open, vision-centric exploration of multimodal llms." arXiv preprint arXiv:2406.16860 (2024). [3] - Beyer, Lucas, et al. "PaliGemma: A versatile 3B VLM for transfer." arXiv preprint arXiv:2407.07726 (2024). ------- We thank the reviewer for a thoughtful review. We hope we addressed your concerns/questions sufficiently and we would appreciate it if you would consider raising your score in light of our response. Please let us know if you have additional questions we can address. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their detailed response. Regarding the first concern, although the authors listed some differences, I still find the implementation of CALVIN not novel enough. However, I appreciate the efforts made in optimizing the method through data and structure tuning, which indeed contribute to better performance in practice. For the second question, I understand the challenges in making a fair comparison. However, including at least one such ablation study would strengthen the paper. Lastly, I am pleased to see the new results in this setting, as they will serve as a valuable baseline for future studies. Overall, I am inclined to raise my score to 6. --- Rebuttal 2: Title: Thank you Reviewer Y2oh Comment: We thank the reviewer for raising the score. We agree that the new experiment has further enhanced the analysis. We will include all the new results in the appendix of the camera-ready version.
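The self-generated-context setting discussed in the rebuttal above (captioning each scene using the model's own previous outputs as context, with a context length of 3) can be sketched as a simple autoregressive loop. This is an illustrative sketch, not CALVIN's actual interface: `caption_fn` is a hypothetical stand-in for the captioning model.

```python
def caption_with_self_context(clips, caption_fn, context_len=3):
    """Autoregressively caption a movie: each clip is captioned using the
    model's own previous captions as context (no ground-truth ADs)."""
    context, outputs = [], []
    for clip in clips:
        # Feed only the most recent `context_len` self-generated captions.
        cap = caption_fn(clip, context[-context_len:])
        outputs.append(cap)
        context.append(cap)
    return outputs

# Toy stand-in "model": echoes the clip id and how much context it saw.
toy_model = lambda clip, ctx: f"caption-{clip} (ctx={len(ctx)})"

print(caption_with_self_context(["a", "b", "c", "d", "e"], toy_model))
```

The first clip is captioned with empty context, and from the fourth clip onward the context window is saturated at three captions, mirroring the context length used in the rebuttal's Cider table.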
Summary: This paper addresses the task of contextual video captioning, with a particular application emphasis on settings for film and tv, where audio descriptions can be useful for making these mediums more broadly accessible. Standard vision-language models are often verbose (problematic since audio descriptions must fit between dialogue) and are prone to hallucinations (e.g., of actor names, or of entities not present in the scene but may be thematically related). The authors propose CALVIN, a model that incorporates pretraining on a range of vision-language tasks (image QA, video captioning, etc.), instruction tuning to improve captioning behavior, and some final adaptations (e.g., prompt engineering, few-shot adaptation, etc.). They observe improvements on the MAD-eval (for film) and TVC (for television) datasets, and qualitative results show more terse descriptions with fewer hallucinations. Strengths: Overall, there are a number of strengths to this work: `+` The domain of contextual description for films and television is important and can have broad positive impacts. It is also a relatively new space in the broader space of dense captioning. `+` The proposed model examines some sensible explorations of the VLM design space for this task setting. Among other things, the adjustments to the training recipe (i.e., which data goes in which training stage) seem to have a good impact, in ablations. `+` Benchmark results on two different datasets, important since MAD-eval only provides pre-computed features (due to copyright), so having an additional dataset without this restriction helps to better clarify differences with other SOTA VLM models. `+` Shows good performance results, quantitatively and qualitatively. 
Weaknesses: However, there are some key areas of weakness, as follows: `-` The paper focuses on reducing hallucinations and verbosity, but the metrics provided (across the model and model-free/n-gram based ones) do not necessarily correlate or show that this is the reason that the metrics have gone up. Similar prior work (e.g., AutoAD series) have done extensive explorations of metrics around character relevance, and stronger model-based metrics. Given that many of these metrics/analyses have been released by prior work, and the research focus of this paper, the empirical results would be significantly stronger if such analyses were performed for here (and ideally, in a consistent way to make an explicit comparison with prior work). `-` The paper presents a large set of ideas, but the individual novelty of each element is not clear, and it would be good to have clearer comparisons with the closest prior work that introduced the relevant idea (in as much of an apples-to-apples fashion as is possible). As one example, the incorporation of IMDB metadata (for actors) as a small set of exemplars has been explored by prior work (specifically, AutoADII); an ablation based on this approach seems like it would be an important reference point for characterizing the relative improvement with the slightly different style proposed here. `-` The overall task of audio description also cares about the localization of the captions along with the captions themselves. While prior work (e.g., in the AutoAD series) also considered this aspect, this is notably missing from this architecture. It is also a bit unclear why, given that time tokens have been explored for similar VLM models already. Relatedly, the paper would benefit from improving the clarity of the exact inputs, in terms of the temporal regions that are provided to the model for captioning and how these are selected. 
(There is some language around the choice of the few-shot examples, but this comment applies more broadly across all settings). Technical Quality: 3 Clarity: 3 Questions for Authors: Overall, this work is borderline, with a preliminary rating of borderline+. The discussion phase will be important for this work, so if the authors could address some of the areas for clarification that are raised above in weaknesses, particularly with respect to prior work in this space, this would be helpful towards finalizing the rating. Note that for many of the points the review above notes a specific example for illustration, but the comments do extend more generally. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have mentioned some limitations of their method, but not in great detail (there is a short, general statement in the final paragraph of the conclusion). This can be expanded in the supplement, for example, addressing the limitation that the model does not seem to output a clear localization of the caption. --- **Post-rebuttal update:** The authors largely addressed the concerns raised in the rebuttal phase. After consideration of this and other reviewers' comments, I've raised my rating further. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer suoE, thank you for your detailed review. We're glad that you found our contextual captioning model insightful. We do think that our thorough analysis of all parts of the video LLM pipeline, from improved data cleaning and handling to clearly ablated training recipes and architectural simplifications, is a key contribution of our paper taken together as a whole, and we believe that this paper would be a good resource for next-gen contextual video LLMs. **[Character analysis]** The character analysis performed in AutoAD II is to check the performance of their character recognition module, not the character recognition in the final caption. We noticed that BertScore is highly correlated with getting the character name right in the audio description (as Bert embeddings seem to be biased towards proper nouns). Prompted by your question, we also performed an analysis of the percentage of times the model predicts the name correctly, and we saw a score of 72% for our best model. Please note that in the remaining 28% of cases, the names are most often replaced with “he” or “she”, which is not technically incorrect. We are unable to compute this for the AutoAD series of models since neither the model checkpoints nor the model outputs are available. **[Verbosity]** Prompted by your question, we computed the token count using TikToken for Table 2 (since outputs of baselines from Table 1 are not available); we present the average token counts on the test set for the models in the table below.

Model | Avg num of tokens (lower better)
-------- | -------
VideoLlama | 82
MovieChat | 126
VideoLlava | 64
Ours | 24

**[Hallucinations]** What we noticed is that most of the models suffer from “factual errors” as discussed in Liu et al. [1]. To the best of our knowledge, there are no benchmarks or tools to evaluate hallucinations in a free-form generated caption on a random video. 
However, to address your question, we used Gemini Vision Pro as a judge to give a hallucination score between 1 and 10 for a given generation. We prompted Gemini with some examples from [1]. We took the video clip from Fig 1 of the main paper (a scene from the Battle Los Angeles movie), which in turn has 6 ADs associated with it. We present the average scores given by Gemini in the table below. We see Calvin is significantly better than the other open-source models in terms of the level of hallucination according to the Gemini-as-judge rating.

Model | Avg hallucination score by Gemini (lower better)
-------- | -------
VideoLlama | 8.7
MovieChat | 9.1
VideoLlava | 6.8
Ours | 4.3

We will extend all the above-mentioned analyses with full details in the appendix of the final version of the paper. **[CALVIN comparisons to prior work]** Calvin and AutoAD II share most of the training data, and where the data differs, its scale is the same. While AutoAD II has additional aspects of incorporating character and identity information, ours is a simplified VideoLLM that takes a scene representation and pure text context. We do think that it is a key strength of our work to look for general-purpose improvements only through contextual information, without specializing the system further with e.g. actor information. While using IMDB actor data in Calvin is an interesting thought, we would like to point out that, despite using that information, AutoAD II’s results are worse than Calvin’s, highlighting the advantages of the generic approach. We believe our model’s high performance is due to data cleanup, conversion of existing data into instruction tuning data, and a well-tuned model resulting from our exhaustive ablations. **[Temporal localization of AD]** Thanks for raising this point. Both AutoAD II and Calvin take a truncated video clip from the movie and previous context to produce Audio Descriptions. 
We observed that whether an AD should be written for a given clip depends on three factors: 1) the subjectivity of the annotator, 2) the length of the clip, and 3) how different the clip is from the previous scene. The AutoAD II paper also observed that pauses longer than 10 sec have an AD while pauses shorter than 4 sec most likely do not have an AD. We also observed similar trends. AutoAD II looks at 30-sec proposals along with audio and video data in the time interval and predicts every 5 seconds whether an AD exists or not. We approached this problem slightly differently as we believed a simpler solution was possible. First, we only look at the “pauses” in the audio of the movie. We trained a simple classifier head on top of our Q-former layer which classifies whether a clip needs an AD or not. Along with the vision embeddings, we also input the length of the clip and the L2 distances of the current clip to the previous and next clips. We finetuned the classifier head for 2 epochs on MAD training data. We saw slightly better results compared to AutoAD II. We present the results in the table below. Since the AutoAD II code for temporal classification is not available, we report the best numbers from the corresponding paper for this model.

Model | AUROC (higher better)
-------- | -------
AutoAD II | 0.78
Ours | 0.8

We believe it is not possible to 100% accurately predict whether an AD is needed in a given pause for shorter durations, since it is quite subjective, and we noticed differences even within movies in the eval split (which are most likely annotated by different humans). We will add this additional analysis with exhaustive details to the camera-ready version. [1] - Liu, Hui, and Xiaojun Wan. "Models see hallucinations: Evaluating the factuality in video captioning." arXiv preprint arXiv:2303.02961 (2023). ------- Thank you again for your thoughtful review. 
We hope we addressed your concerns/questions sufficiently and we would appreciate it if you would consider raising your score in light of our response. Please let us know if you have additional questions we can address. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the detailed response to my review! I will be updating and finalizing my rating after all discussion periods are complete, but this response is good overall and I believe it strengthens the quality of the work. The temporal localization and additional discussions are much appreciated. Below, a quick response to one subset of the rebuttal above: *Re: analysis, verbosity, and hallucinations*: The analyses provided by the authors are helpful to better contextualize the method. - I recognize that there are not good hallucination metrics for open-ended video captions, but the character analysis is one that (at least in terms of the subject of the caption) does help to address this, especially given the movie domain. For general context, when the model does not mention the correct character, what are examples of the incorrect outputs that the model does provide instead? (also to confirm, are the pronoun substitutions aligned to the character identity?). - It's unclear how well-correlated the hallucination scores (with the VLM "judge") are with human judgements (especially since VLMs can hallucinate themselves), but this analysis can still be useful. If the authors have additional examples that qualitatively show the different scores that the model-based method outputs and can include them here/in their final supplement, that would be much appreciated. --- Reply to Comment 1.1.1: Title: Thank you Reviewer suoE! Comment: We are glad to hear the reviewer is satisfied with our rebuttal. We address your queries below- > when the model does not mention the correct character, what are examples of the incorrect outputs that the model does provide instead? 
(also to confirm, are the pronoun substitutions aligned to the character identity?). We used spaCy to extract proper nouns from each ground-truth AD to conduct the character analysis. From the qualitative analysis of mistakes, we noticed that the character name is often replaced with pronouns. We also observed a small percentage of instances where the model incorrectly refers to characters, for example, calling `Lisa` `Susan` even though no character named `Susan` exists in the movie. This is perhaps a bias inherited from the training data or the LLM itself. To quantify the correctness of predicted pronouns, we did not find any off-the-shelf tools to associate character names with pronouns. From qualitative analysis, the outputs seem reasonable most of the time. We will add this analysis, all the experiments done for the rebuttal, and a brief discussion of the open problems in evaluation to the appendix of the paper. > If the authors have additional examples that qualitatively show the different scores that the model-based method outputs and can include them here/in their final supplement, that would be much appreciated. To give you a sense of the scores, we provide some scenes from the analysis, CALVIN's predictions, and Gemini's scores in the table below. We see that in its current form the VLM judge penalizes when a name is missing or the description diverges too far from the original. We believe an in-depth analysis is needed to perfect the prompting of the VLM judge by looking at many diverse scenarios (which is a study on its own and out of scope for this paper). We will add these examples and a discussion to the appendix of the camera-ready version of the paper.

Ground Truth | Calvin prediction | VLM Judge score (lower better)
--- | ---- | ----
Swinging around, Lenihan aims his gun at the sky but sees nothing. | The soldier points the gun and looks at an apartment | 5
Behind him, an alien emerges from the pool. | A robot is in the pool | 3
Lenihan wheels around. | Lenihan turns | 1

Please let us know if you have any additional questions we can address.
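The character analysis described in this thread can be approximated programmatically. The sketch below is a simplified, hypothetical stand-in for the spaCy-based pipeline mentioned in the rebuttal: it uses a crude capitalized-word heuristic instead of real part-of-speech tagging, and `proper_nouns` / `name_match_rate` are illustrative names, not the authors' actual tooling.

```python
import re

def proper_nouns(text):
    # Crude stand-in for spaCy PROPN tagging: capitalized words,
    # minus a tiny illustrative stop list of common non-name words.
    words = re.findall(r"\b[A-Z][a-z]+\b", text)
    stop = {"The", "A", "An", "He", "She", "It", "They", "We", "I"}
    return {w for w in words if w not in stop}

def name_match_rate(gt_ads, pred_ads):
    """Fraction of ground-truth ADs whose proper nouns all appear in the prediction."""
    hits = total = 0
    for gt, pred in zip(gt_ads, pred_ads):
        names = proper_nouns(gt)
        if not names:        # skip ADs that mention no character names
            continue
        total += 1
        if names <= proper_nouns(pred):
            hits += 1
    return hits / total if total else 0.0
```

On the example rows from the table above, the heuristic would count "Lenihan turns" as a correct name match for "Lenihan wheels around." while a prediction that drops the name entirely would count as a miss.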
Summary: Note: Raised score by 1 point after reading reviews, responses and the concerns addressed in the rebuttal phase. ---- This work introduces a video LLM model that can describe movie scenes in context, incorporating names of characters, and generate short contextual descriptions. They train a model on data from image question answering datasets and video description using context from previous frames; these enable their model to generate better contextual descriptions of events. They evaluate their model on the Movie Audio Description (MAD) and TV-Caption datasets and show improved performance. Strengths: * Their model tuning strategy of using the context of the video is well motivated. * Their tuned model shows good performance on both the MAD-eval dataset and TV-caption datasets. * Ablations are explained clearly and evaluated well. Weaknesses: * With regard to evaluations, for the movie audio description task, the CMD-train and CMD-eval [1,2] datasets are also available. It’s not specified why the CMD dataset has not been used in this work, at least for eval. * It seems valuable to have a baseline with the closed multimodal LLMs, e.g., GPT-4o and Gemini 1.5 Pro, all of which claim to have superior video captioning capabilities. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Have you evaluated on the CMD-eval dataset [1,2]? Do you find the performance to be different? If not, is there a specific reason for not utilizing the CMD-AD eval dataset? 2. Do you evaluate on any closed multimodal LLMs? If you tried, did you face any issues? If not, is there a reason not to try? [1] Bain, Max, Arsha Nagrani, Andrew Brown, and Andrew Zisserman. "Condensed movies: Story based retrieval with contextual embeddings." In Proceedings of the Asian Conference on Computer Vision. 2020. [2] https://www.robots.ox.ac.uk/~vgg/research/autoad/ Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed at the end of the appendix. 
The reviewer’s understanding is that this would have to move to the main paper. Please check the guidelines. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer jqA2, thank you for your positive review of our work, and your interest in our modeling strategy. There were a few questions raised regarding dataset choices and evaluation of API models that we're happy to answer below: **[Evaluation of the CMD dataset]** CMD is a classic video dataset, but we did not evaluate on this dataset for several reasons. First, and most pragmatically, the download links to the raw data are no longer publicly available. Second, the CMD dataset’s descriptions are metadata from the YouTube clip, Wikipedia, and IMDb. While these descriptions give a general sense of the scene, they are not strictly scene audio descriptions. Finally, in the CMD paper, the authors evaluate models for retrieval tasks but not for text generation tasks. While we hold the dataset in high regard, it is not so clear to us how well performance on the CMD eval split measures the fine-grained ADs that we're looking for with modern models. However, similar to another related dataset, CinePile [1], the CMD dataset might prove useful in the pre-training stage of the model. While pretraining is a bit out of scope for us within the rebuttal phase, we will provide the results of model variants with these two datasets in the mix for the camera-ready version. **[Evaluation of closed MultiModal LLM for the task]** Due to copyright reasons, the MAD dataset provides only CLIP embeddings but not the raw videos, thus preventing us from evaluating closed MM-LLM models directly on MAD. This makes it impossible to test this dataset with general-purpose multi-modal APIs, which only accept video input (not CLIP embedding input). Nevertheless, we acquired scenes from 2 movies of the test set from YouTube - How Do You Know and Battle Los Angeles (approx 300 AD scenes) - for this rebuttal and ran Gemini Pro. We present Google Gemini Pro's and CALVIN's results on this subset below. 
We chose Gemini Pro as the commercial model since it performs better than GPT-4o according to recent video understanding benchmarks [1,2]. While Gemini's results are better, CALVIN's numbers are close behind for a much smaller model, trained with research computing and public data. Since it is not known what is in its training mix, Gemini may have some level of memorization of the evaluated data, while CALVIN never saw this data in training. We will add these results to the appendix in the camera-ready version.

Model | BertScore | CIDEr | ROUGE-L | SPICE
-------- | ------- | -------- | ------- | -------
Gemini | 43.23 | 28.11 | 18.54 | 8.99
Calvin | 41.42 | 26.90 | 17.88 | 8.23

[1] - Rawal, Ruchit, et al. "Cinepile: A long video question answering dataset and benchmark." arXiv preprint arXiv:2405.08813 (2024). --------- Overall, we appreciate the reviewer's insights, and these additional experiments have added depth to our analysis. We hope we addressed your questions sufficiently and we would be grateful if you would consider raising your score. Do you have any additional questions we can address? --- Rebuttal Comment 1.1: Title: Satisfactorily addressed raised concerns Comment: Thanks for the response and additional numbers! I also looked at the concerns raised by other reviewers; thanks for the thoughtful responses to those reviews as well! It's unfortunate that there aren't other datasets to properly evaluate CALVIN against multimodal LLMs. I definitely appreciate the quick experiment with the 2 movies that you were able to get hold of, and it is good to be able to put CALVIN's performance in perspective with these much larger and expensive off-the-shelf API models. Please include these discussions in the main paper or the supplement along with some notes on future directions of how one can evaluate models on this task. I will increase my score based on the response. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We are glad to hear that you are satisfied with our response. 
We will include all the additional studies conducted for the rebuttal in the appendix of the paper. Additionally, we will add a discussion in the conclusion section addressing the limitations of the current MAD dataset and the need for a new 'raw video' dataset.
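For context on the caption metrics reported in this thread: ROUGE-L is an F-measure over the longest common subsequence (LCS) of reference and candidate tokens. The sketch below is purely illustrative (real evaluations use standard caption-evaluation toolkits; the function names and the common `beta` weighting shown here are our own assumptions, not the paper's exact tooling):

```python
def lcs_len(a, b):
    # Longest common subsequence length via standard dynamic programming.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(reference, candidate, beta=1.2):
    """ROUGE-L F-score between two whitespace-tokenized strings."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    recall, precision = lcs / len(ref), lcs / len(cand)
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)
```

An identical reference and candidate score 1.0; a terse but accurate candidate (like the short ADs discussed above) keeps precision high while losing some recall.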
Rebuttal 1: Rebuttal: We would like to thank all four reviewers for their highly constructive feedback! We very much appreciate their assessment that our paper has “**thorough ablations**”[jqA2,suoE,Y2oh,GsDL] and “**good performance**”[jqA2,suoE,Y2oh,GsDL], and is “well motivated”[jqA2], addresses an “important…novel problem”[suoE, GsDL], and is “well-written”[Y2oh]. The reviewers raised many excellent and thought-provoking questions, and we conducted many experiments to address them. This interaction added more depth to the paper. ### [A brief summary of questions and additional results] Here we summarize the additional analyses we conducted as part of the discussion. All the new analyses will be added to the appendix of the camera-ready version of the paper. 1. **Evaluation of closed MultiModal LLM for the task:** Since the MAD dataset does not provide “raw” video data, we collected videos from YouTube for 2 movies from the test set and evaluated Gemini Pro on them. Results show Calvin has comparable performance (albeit slightly worse than Gemini) despite being a much smaller model. (For more details, see the corresponding reviewer response.) 2. **Character analysis, verbosity, and hallucination aspects of captions:** For verbosity, we computed the average number of tokens of predictions (from Table 2). For character analysis, we computed the percentage of times the model can predict the proper nouns correctly. For hallucinations, we used the Gemini model as a judge to predict the level of hallucinations in the prediction against the ground truth. On all the metrics, our model Calvin did better than the other open-source models. (For more details, see response to Reviewer suoE) 3. **Temporal localization of AD:** We trained a simple classifier head on top of the Q-former and are able to get comparable results to AutoAD II (For more details, see response to Reviewer suoE) 4. **How to do contextual captioning when ground truth context doesn’t exist?** Section 5 in the main paper is dedicated to this. 
Additional results show Calvin outperforms baselines in this scenario as well. (For more details, see response to Reviewer Y2oh) Here are two points raised by a couple of reviewers that we want to re-emphasize. We will add this discussion to the related work section in the camera-ready version. ### [Are baseline comparisons reasonable?] We believe the comparisons with baselines are reasonable due to the following two factors: 1. The scale of the training data is the same, although the data mixtures differ, which is a core contribution of our paper. We also discussed in detail how to clean the data to improve performance, which was not addressed in previous works. 2. The size of the previous models and Calvin is the same. VideoLlama and Calvin have a similar number of parameters. Our improvements result from thorough experimentation and study of different components of the model pipeline. ### [How is Calvin different from previous methods?] The first main difference between CALVIN and other models is how the video representations are projected onto the language model space. Previous methods such as VideoLlama used 2 projection Q-formers in sequence (with one component frozen and another trained from scratch) without any justification for this choice; we simplified the pipeline with a single Q-former that is trained from scratch. Merging the parameters into a single module simplifies the architecture for thorough ablations, and we also found that the results are slightly better with 1 Q-former vs. 2 Q-former blocks, perhaps due to the slight reduction in the number of parameters. The second main difference is the LLM finetuning and the usage of LoRA. Our experiments showed that unfreezing the LLM in stage 2 can lead to significant performance gains, while previous methods such as VideoLlama had that component frozen throughout training. 
Many recent works such as MM-1 [1], PaliGemma [2] have shown that while some critical architectural choices such as positional embeddings or certain activations could be important, most of the gains came from the data mixtures. Compared to previous works, we are also the first ones to propose the use of image VQA data in training and the conversion of caption data into synthetic VQA via LLMs and to show that adding these into training can improve performance. ----- In conclusion, our work includes a number of key contributions concerning architectural choices for video LLM, model training, and dataset choice and setup. On the architectural side, we show that we can simplify/unify complicated input pipelines from prior work with a single Q-former, and we highlight the importance of continued training in the LLM part of the model. We believe this can help video-understanding researchers make calls while training the next-gen models. For example, we are one of the few video-understanding papers that calls out the importance of data cleaning and data mixtures which is also echoed by recent works such as MM-1 [1], and PaliGemma [2]. We further introduce novel few-shot adaptations of the model for new movies which can make the model useful in practice and take it beyond the academic realm. [1] - McKinzie, Brandon, et al. "Mm1: Methods, analysis & insights from multimodal llm pre-training." arXiv preprint arXiv:2403.09611 (2024). [2] - Beyer, Lucas, et al. "PaliGemma: A versatile 3B VLM for transfer." arXiv preprint arXiv:2407.07726 (2024).
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Richelieu: Self-Evolving LLM-Based Agents for AI Diplomacy
Accept (poster)
Summary: The paper introduces an LLM agent that can solve the game of Press Diplomacy only by self-play, without fine-tuning or regularizing on human data. The method simplifies the previous architecture from the PIKL/CICERO papers by FAIR (https://openreview.net/forum?id=F61FwJTZhb) by avoiding reinforcement learning and human demonstrators. Given that the authors use commercially available LLMs (GPT-4 and LLaMA), I attribute the success of the method to the alignment of the LLMs to human preferences and communication style. Additionally, they exploit the planning capabilities of current LLMs to bypass the need for RL. Strengths: 1. The introduction of sub-goals is an interesting feature. It acts as a chain-of-thought mechanism, which is proven to improve the LLM output and avoid hallucinations. Smaller reasoning steps are more robust than long-horizon planning. 2. The paper proposes a novel architecture with a memory buffer that acts like a pseudo-RAG to improve the context of the LLM's information. 3. The authors provide a very complete repo to reproduce the experiments. Weaknesses: 1. Well, the paper benefits from the improvement of LLMs over time, as compared with GPT-2 (used by CICERO). The alignment to human data is not needed, as the LLMs now have the alignment incorporated by the vendor. The same goes for the planning step: the LLMs intrinsically got better at planning, so now we can drop RL for free. 2. When evaluating against CICERO, the win rate over CICERO is not that high, with your model winning the game only a small fraction (<1%) of the time. I believe the win rates are not impressive; what is notable is the fact that the victories are achieved by a more lightweight model that is cheaper than training CICERO, as it doesn't require human data. 3. The alignment with human play and conversational style is hard to assess, as it would require having your model play against humans on an online platform and assessing whether players realize they were playing against an AI player. 
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. See weakness above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have not provide any analysis on the model limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Well, the paper benefits from the improvement of LLMs over time, as compared with GPT2 (used by CICERO). The alignment to human data is not needed as now the LLMs have the alignment incorporated by the vendor. Same for the planning step, the LLMs intrinsically got better at planning so now we can demote RL for free. **A:** We appreciate your point regarding the improvements in LLMs and their impact on our work. As illustrated in Figure 5, while the enhanced alignment in LLMs indeed boosts performance (GPT-4 is better than others), we observed that a vanilla GPT-4 still falls short in AI diplomacy without our framework (Table 2). This indicates that the alignment in LLMs lays a foundation, but our approach is key to unlocking the models' potential in social simulation. > When evaluating against CICERO, the win rate over CICERO is not that high with your model winning the game only (<1%) of the time. I believe the win rates are not impressive, but the fact that the victories are achieved by a more lightweight model that is cheaper than training CICERO. As it doesn't require human data. **A:** We agree that the main contribution of this work is providing a low-cost paradigm to address AI diplomacy without human data. In the revision, we will highlight the data efficiency in the experiment section. Moreover, we observed a large gap when comparing the scores the agents gained: our agent's score is about 10% higher than Cicero's. > The alignment with humans playing and conversational style is hard to assess as it will require having your model to play against humans in an online platform and assessing whether players realize they were playing against an AI player. **A:** Thanks for your suggestion. Considering the ethical issues, we will conduct experiments with human players in our future work after obtaining official permission. You can refer to the dialogue shown in Fig. 6~7 and the one-page PDF for more vivid conversation examples. 
--- Rebuttal Comment 1.1: Title: Improved rating: 7 Comment: I would like to thank the authors for clarifying my questions and engaging with interest. Since my concerns were clarified, I have upgraded my rating to 7. This is a good paper that merits an accept due to its contributions to a widely researched topic such as Diplomacy and negotiation. --- Reply to Comment 1.1.1: Title: Appreciation for Your Support Comment: Thank you for your kind words. We sincerely appreciate your positive feedback and the improved rating. Your acknowledgment of our work serves as a strong motivation for us to continue striving for excellence in our future work. --- Reply to Comment 1.1.2: Title: Request for Verifying Rating Update Comment: We are writing to express our sincere gratitude for your positive feedback and upgraded rating to **7**. However, we have noticed that **the system still reflects the previous rating of 6**. We understand that such oversights can happen, and we are reaching out to kindly bring this to your attention. Could you please verify if there has been any update or if there is a need for further action on your part to ensure that the revised rating is accurately reflected in the system? Your assistance in this matter is greatly appreciated. Thank you once again for your valuable feedback and for your understanding.
Summary: This paper focuses on diplomatic activities using LLM-based societal agents without relying on human data. It introduces a new paradigm for AI diplomacy agents that can improve through self-play and experience collection. The new agent, Richelieu, achieves state-of-the-art performance compared to current methods and demonstrates generalizability across different LLMs. Strengths: This paper presents a new AI agent for diplomatic scenarios that surpasses all previous methods. This agent can improve autonomously without relying on human data. And the accompanying figures effectively illustrate the concept. Weaknesses: - There are some typos (line 61, line 69), and certain citations (line 80, line 237, line 276) are not properly functioning. - Some experimental details are still unclear. - Which base model is utilized in the experiment for Figure 4 and Table 1? Is the improvement over all other baselines attributed to the agent paradigm or to the strong capabilities of GPT-4 or LLaMA3 compared to previous base models like Cicero, which only used a 2.7B model? - In the main experiment for Table 1, when Richelieu plays as a randomly selected country, what model or agent represents the other countries? - In the ablation study, what are the effects of blocking each module, such as only blocking the sub-goals module? - Techniques such as memory and planning in building agents have been proposed previously, which reduces the novelty of the method. Technical Quality: 2 Clarity: 2 Questions for Authors: - Is the social reasoning flow the same or different for the planner and negotiator parts? - Can this paradigm generalize to other fields in social simulation? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > There are some typos (line 61, line 69), and certain citations (line 80, line 237, line 276) are not properly functioning. **A:** We will fix them in the revision. > Which base model is utilized in the experiment for Figure 4 and Table 1? **A:** We use GPT-4 as the base model for our agent in the experiment. > Is the improvement over all other baselines attributed to the agent paradigm or to the strong capabilities of GPT-4 or LLaMA3 compared to previous base models like Cicero, which only used a 2.7B model? **A:** Although the improved capabilities of large language models (LLMs) significantly help our model, merely relying on the LLM cannot achieve very good results. When we initially used the LLM directly, it was unable to make correct decisions. Our ablation study (line 315) showed that as our model improved, its negotiation and reasoning abilities also improved (Table 2). At the same time, we also analyze the generalization of our framework by using different LLMs. The results (Figure 5) show that despite the differences in the final outcomes due to the varying capabilities of the different LLMs, the self-evolving mechanism can steadily improve the agents' performance. Cicero used extra human data for training the dialogue and planning modules separately. In contrast, Richelieu relies on no human data, and a direct application of an LLM, e.g., GPT-4, is unsuccessful, as the experiments show. It is the proposed combined paradigm that enhances LLMs like GPT-4 to achieve complex planning capability, as demonstrated by the experiments. > In the main experiment for Table 1, when Richelieu plays as a randomly selected country, what model or agent represents the other countries? **A:** In the experiment, we randomly select 3 or 4 countries to be played by Richelieu, and the other countries are controlled by Cicero. 
> In the ablation study, what are the effects of blocking each module, such as only blocking the sub-goals module? **A:** Without a sub-goal, Richelieu will be very shortsighted and will tend to favor short-term gains over long-term gains. In the planner, to formulate a sub-goal, Richelieu needs to consider long-term benefits. Therefore, making decisions based on the sub-goal can naturally ensure that the decision includes consideration of long-term benefits. But after blocking this module, most of the decisions made by Richelieu are to occupy as much territory as possible in the current turn, similar to a greedy algorithm. Moreover, Richelieu's ability to handle relations with other countries will decline. > Techniques such as memory and planning in building agents have been proposed previously, which reduces the novelty of the method. **A:** The memory and planning modules do not work well without the self-play data and reflection mechanism for LLM agent models as shown in Table 2. Such an integrated self-evolving scheme for LLM agents achieves high performance without human data, which has not been verified in previous work on AI diplomacy. > Is the social reasoning flow the same or different for the planner and negotiator parts? **A:** A major difference between our model and traditional models is that there is no need to separately establish a decision-making model and a negotiation model. Therefore, in our model, the social reasoning result applied in the process of planning and negotiation is the same. Our model will perform social reasoning at the beginning of each turn. We will analyze the current state, the strengths and weaknesses of each country, and infer the strategic intentions of each country. Based on these, we will speculate which country can be our potential ally and which country will be our adversary. The social reasoning result will first be used to establish its own sub-goal. 
Then, during the negotiation phase, negotiations are conducted based on these results and goals. During the negotiation process, the other party's words may cause us to revise the results. > Can this paradigm generalize to other fields in social simulation? **A:** Our framework can be applied to most social interaction tasks. Most components in our framework can be easily generalized to a new task by modifying the content. Social reasoning enables the agent to handle complex and dynamic social relationships. The negotiation pipeline opens the potential of communicating with others to probe their minds or reach a consensus. The hierarchical strategy with reflection enhances the ability to handle long-term planning. The self-evolving mechanism (reflection with self-play memory) further improves the overall performance without manual supervision. These modules cover most of the challenges in multi-agent interactions, e.g., werewolf games, economic games, and daily interactions. We further adapt our framework to a ***werewolf game***. The results demonstrate that our reasoning framework achieves results comparable to the other methods. Due to time limitations, we do not apply self-play in the current version. To be specific, in the experiment, we let our agent play as a werewolf in a seven-player game, where there are two werewolves, one witch, one seer, one guard, and two villagers. The experimental results show that the win rate of our agent is 59.2%. For comparison, the specifically designed LLM-based agent achieved a ~65% win rate. --- Rebuttal 2: Title: Looking forward to your feedback Comment: Dear Reviewers raEc and EuFx, I hope this message finds you well. As the author discussion period is nearing its end, we kindly request the opportunity to address any further feedback you may have regarding our response to submission 3239. We have made substantial clarifications and provided additional results based on your initial comments. 
If you have any further questions or require clarification on our response, please let us know. Your insights are crucial for the improvement and assessment of our work. Considering the final score, we kindly ask for a potential score improvement if you believe our response has addressed your major concern. We greatly appreciate your time and expertise and look forward to your response. Best regards, Authors of Submission 3239
Summary: The paper presents “Richelieu,” a self-evolving large language model (LLM)-based agent designed for the game of Diplomacy. Richelieu integrates strategic planning, social reasoning, and memory reflection to handle complex multi-agent environments without relying on domain-specific human data. The model self-evolves through self-play games, demonstrating its effectiveness. Strengths: 1. This paper studies an interesting social science problem and introduces an LLM-based paradigm to build an AI Diplomacy agent. 2. This paper proposes a self-evolving strategy through self-play without human data. 3. The presentation of this paper is good, clearly describing the core idea of the paper. Weaknesses: 1. Lack of in-depth analysis of the intermediate process. In the proposed framework, Social Reasoning and the Planner with Reflection play an important role. For example, I wonder whether the LLM can accurately model relationships and infer intentions. The authors could provide more analysis or examples of these modules to prove the effectiveness of the LLM on this task. 2. Some technical details are not clear. For example, in the process of experience retrieval, both object similarity and state similarity are considered, but their implementations are not described. 3. While the paper demonstrates the effectiveness of Richelieu, scalability to larger and more diverse environments remains to be fully explored. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The generation process of an LLM is nondeterministic, which may lead to unstable reasoning and reflection, and finally inconsistent results. Does this paper consider this problem, and how is it dealt with? 2. Could you further elaborate on the significance of self-play games in the self-evolution process? 3. How are both object similarity and state similarity considered in the process of experience retrieval? 4. Could you provide examples of how Richelieu adapts its social reasoning and negotiation tactics based on past interactions? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See Weaknesses and Questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Lack of in-depth analysis of the intermediate process. Can the LLM accurately model relationships and infer intentions? The authors could provide more analysis or examples of these modules to prove the effectiveness of the LLM on this task. **A**: The LLM can accurately model relationships and infer intentions. We conduct an experiment to evaluate the rate at which the agent successfully identifies social relationships and infers others' intentions. As the baselines do not explicitly model relationships and intentions, we cannot directly access the ground truth for evaluation. Instead, we let all players use our agent but with different LLMs, i.e., 4 countries use GPT-4 and 3 countries use Llama3. The accuracy is reported in the following: | | GPT-4 |Llama3| | ---- | ---- |----| | relationship | 85.74% |85.52%| | intention (sub-goal) | 74.67% |74.11%| We can see that the accuracy of social reasoning is consistent with the overall performance of the agent, indicating the effectiveness of social reasoning. In the one-page PDF, we further provide an example case to demonstrate the effect of the self-play games. It shows that agents evolved through reflection, with memory collected in the self-play games, can perform long-term planning. > How are both object similarity and state similarity considered in the process of experience retrieval? **A**: The overall similarity is the weighted sum of the two similarity metrics. $$ S = \lambda S(s_t,s_{t'})+(1-\lambda)S(\chi_{i,t},\chi_{i,t'}) $$ In our implementation, $\lambda=0.65$. $s$ is the description of the state and $\chi$ is the sub-goal at the corresponding state. We select the top $m$ experiences from the memory buffer with the highest similarity $S$ to the current turn. Historical turns whose state and sub-goal are similar to the current turn's can be used to reflect on and optimize the decisions made in the current turn. We will add more details in the revision. 
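As an illustrative sketch of this retrieval rule: the weighted-sum combination with $\lambda=0.65$ and the top-$m$ selection follow the rebuttal, but the choice of cosine similarity over toy embeddings, and all names and numbers in the example memory, are assumptions for illustration only.

```python
# Hypothetical sketch of the experience-retrieval rule described above:
# overall similarity is a weighted sum of state similarity and sub-goal
# similarity, and the top-m most similar past turns are retrieved.
import math

def cosine(u, v):
    # Toy stand-in for the (unspecified) similarity metric S(.,.)
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(memory, state_vec, goal_vec, lam=0.65, m=2):
    """memory: list of (state_embedding, subgoal_embedding, record)."""
    scored = []
    for s, g, record in memory:
        # S = lambda * S(state) + (1 - lambda) * S(sub-goal)
        score = lam * cosine(state_vec, s) + (1 - lam) * cosine(goal_vec, g)
        scored.append((score, record))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [record for _, record in scored[:m]]

# Toy memory buffer of past turns (embeddings are illustrative)
memory = [
    ([1.0, 0.0], [1.0, 0.0], "turn A"),
    ([0.0, 1.0], [0.0, 1.0], "turn B"),
    ([1.0, 0.1], [0.9, 0.1], "turn C"),
]
top = retrieve(memory, [1.0, 0.0], [1.0, 0.0], m=2)
```

Retrieved turns similar in both state and sub-goal ("turn A" and "turn C" here) would then be used for reflection on the current turn.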
> Scalability to larger and more diverse environments remains to be fully explored. **A**: Thanks for your suggestion. Our framework can be applied to most social interaction tasks. Most components in our framework can be easily generalized to a new task by modifying the content. Social reasoning enables the agent to handle complex and dynamic social relationships. The negotiation pipeline opens the potential of communicating with others to probe their minds or reach a consensus. The hierarchical strategy with reflection enhances the ability to handle long-term planning. The self-evolving mechanism (reflection with self-play memory) further improves the overall performance without manual supervision. These modules cover most of the challenges in multi-agent interactions, e.g., werewolf games, economic games, and daily interactions. We further adapt our framework to a ***werewolf game***. The results demonstrate that our reasoning framework achieves results comparable to the other methods. Due to time limitations, we do not apply self-play in the current version. To be specific, in the experiment, we let our agent play as a werewolf in a seven-player game, where there are two werewolves, one witch, one seer, one guard, and two villagers. The experimental results show that the win rate of our agent is 59.2%. For comparison, the specifically designed LLM-based agent achieved about a 65% win rate. > The generation process of the LLM is nondeterministic, which may lead to unstable reasoning and reflection, and finally inconsistent results. Does this paper consider this problem and how to deal with it? **A**: In experiments, we set a temperature of 0.3 to ensure relatively stable generation of LLM policies. The overall reasoning framework also ensures stability and consistency in the AI agent's performance. Besides, we find that state-of-the-art LLMs (GPT-4 or Llama 3) can deal with this problem well, as shown in Fig. 5. 
> The significance of self-play games in the self-evolution process. **A**: The reflection module highly relies on historical experiences to guide the generation of effective sub-goals. Thus, the diversity of the memory will lead to the success of the reflection. The self-play games can help the agent autonomously explore different experiences and collect them in the memory, which is fundamental for the whole self-evolving process. In this way, we can build an agent without human training data or any existing agents for the task. The results in Figure 5 show that as self-play progressed, Richelieu's win rate continued to increase until it reached a stable value. Its effectiveness can also be verified by the results of the ablation study (Table 2). Moreover, we also provide an example in the one-page PDF, showing that self-play memory can guide the agent to consider the long-term effect of the strategy. > **Could you provide examples of how Richelieu adapts its social reasoning and negotiation tactics based on past interactions?** **A**: Examples are given in the one-page PDF. --- Rebuttal Comment 1.1: Title: Reply Comment: Thank the authors for your clarifications. I will increase my score from 5 to 6. --- Reply to Comment 1.1.1: Title: Thanks Comment: We sincerely appreciate the insights you’ve shared for this work and are grateful for your consideration in raising the score.
Summary: This paper proposes a new framework for LLM-based agents to play diplomacy games and improve themselves. The proposed framework, Richelieu, has several components and many abilities. The authors perform good experiments and an ablation study to show how the framework works. Strengths: - The framework has several core components, and I believe the authors build a good agent that can handle the game. - The game or the setting is good for evaluating the comprehensive ability of LLM-based agents. Weaknesses: - What does "evolve" mean in the article? Does it only refer to the storage of memory modules? If so, it seems that the model itself has not been updated. - The baselines in the article are not LLM-based agents; can a stronger baseline be provided? For example, ReAct, PlanAct, AutoGPT. - The article lacks some references to work in the field and discussion of related work, e.g., [1][2][3]. [1] The Rise and Potential of Large Language Model Based Agents: A Survey [2] ReAct: Synergizing Reasoning and Acting in Language Models [3] ProAgent: Building Proactive Cooperative AI with Large Language Models Technical Quality: 2 Clarity: 2 Questions for Authors: See Weaknesses Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: - The self-evolution described in the paper is not clear. - For the memory module and self-play module, a more detailed analysis should be included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > What does "evolve" mean in the article? Does it only refer to the storage of memory modules? If so, it seems that the model itself has not been updated. **A**: "Evolve" means the AI agent's capability and strategy are autonomously enhanced over time without direct human supervision. Beyond memory storage, self-evolution is achieved through several key components, including the self-play games that generate diverse memory, the experience abstraction that extracts meaningful information from memory, and the reflection mechanism that updates the planner. Besides, the introduced thought flow also plays an important role in the overall performance. Thus, we do not need to update the parameters of the base LLMs' neural networks. We argue that such a parameter-free evolution mechanism is more efficient than finetuning the LLM. > The baseline in the article is not an LLM-based agent; can a stronger baseline be provided? For example, ReAct, PlanAct, AutoGPT. **A**: Thanks for your suggestion. As there were no previous works exploring the use of LLM-based agents for AI diplomacy, most of the baselines are RL-based agents. The ablation methods in Table 2 can be regarded as LLM-based baselines combining different techniques, such as chain of thought, reflection, memory, etc. We can see that deploying a vanilla LLM or common techniques to build an LLM-based agent does not work well on this task. Following your suggestion, we further built an LLM-based agent using AutoGPT and compared it with our agent. In the testing, we randomly select three countries to be controlled by Richelieu, and the other four countries are controlled by AutoGPT. Note that the agent controls each country independently. The results are given below. 
| | win | most SC | survived|defeated| | ---- | ---- | ---- | ---- | ---- | | Richelieu_1 | 9.3%|18.2%|37.9%|34.6%| | Richelieu_2 | 9.9%|19.4%|37.7%|33.0%| | Richelieu_3 | 8.1%|17.4%|39.2%|35.3%| | AutoGPT_1 | 1.2% |4.6%|32.4%|61.8%| | AutoGPT_2 | 1.2% |4.2%|34.4%|60.2%| | AutoGPT_3 | 1.5% |4.0%|32.5%|62.0%| | AutoGPT_4 | 2.6% |3.6%|32.3%|61.5%| | | win | most SC | survived|defeated| | ---- | ---- | ---- | ---- | ---- | | Richelieu | 9.10% |18.33%|38.27%|34.30%| | AutoGPT | 1.63% |4.10%|32.90%|61.38%| > The article lacks some references to work in the field and discussion of related work, e.g., [1][2][3]. **A**: We will add more references in the revision. Most previous work focuses on developing reasoning frameworks for LLM-based agents, such as ReAct[2], PlanAct, and AutoGPT, to accomplish complex **single-agent** tasks. ProAgent[3] provides a framework for decentralized multi-agent collaboration. In that setting, it assumes the relationships between agents are fixed (collaborative) and the agents do not need to negotiate with others. In contrast, AI diplomacy is more challenging: the relationships are uncertain and dynamically changing, and the agents need to actively negotiate with others and plan long-term strategies to win the game. Hence, previous work cannot be directly applied to this task, and we further introduce a new self-evolving LLM-based agent for AI diplomacy. Note that the proposed approach is not limited to specific LLMs or tasks, but is a principled framework to enable LLM-based agents to work in complex environments with social interactions. > For the memory module and self-play module, a more detailed analysis should be included. **A**: The memory module records the state of each turn, as well as the negotiation results and final actions taken by each country. It also records the state changes over a period of time after this turn, thus determining the impact of the actions. 
Therefore, we can use the memory to find turns similar to the current turn and reflect and optimize based on the actions taken and subsequent state changes. As self-play progresses, the experiences in the memory modules will accumulate. We can find more similar historical turns as experiences during reflection, thereby enhancing the capabilities of our model. Based on the results from Table 2 and Figure 5, we can see that the memory module and self-play have a significant impact on enhancing the model's capabilities. Moreover, as self-play progresses, the model's capabilities gradually improve and eventually reach a relatively stable level. --- Rebuttal Comment 1.1: Title: Reviewers EuFx and raEc Comment: The authors have responded to your reviews. Have they sufficiently answered your questions, and if not, do you have any further clarifying questions you would like to ask? The author discussion period ends tomorrow (13th).
Rebuttal 1: Rebuttal: Our rebuttal includes a one-page PDF and the following four rebuttals for each Official Review. The PDF contains an example. We show two cases with similar states: the first shows decisions and negotiations made without self-play, and the second shows those made after self-play. This example shows that self-play influences the model's decision-making and negotiation, making it focus more on long-term benefits. Pdf: /pdf/18841fdf17f7a7f590bfd56848d2f431cc6a1e53.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Slot-VLM: Object-Event Slots for Video-Language Modeling
Accept (poster)
Summary: This paper proposes to use object-centric slot representations as visual representations in vision language models. Specifically, it adopts the popular video LLM paradigm and concatenates a pre-trained visual encoder with a pre-trained LLM. It uses a slot attention method to group visual features into object slots and event slots before feeding them into the LLM. These slot representations compress visual features into a small set of tokens and stand out in terms of efficiency. The final model, Slot-VLM, performs well in video question answering tasks and has the potential to handle long videos. Strengths: 1. It's an intuitive and good idea to apply object-centric inductive bias to vision-language models. 2. The experiments in the main paper and appendix are sufficient, assessing various aspects of the method. 3. The performance is surprisingly good. Weaknesses: 1. Are the temporal slots really event slots? Since there are m spatial positions after downsampling, what if an object moves around from one spatial patch to another? Then the event cannot be effectively captured by the slots, since the temporal slot attention happens at a fixed spatial position along the temporal dimension. 2. The visualization seems poor. It was expected that slot attention, as a learnable method, should perform better than naive clustering algorithms, as used in ChatUniV. However, the results shown in Figure 3 do not really group objects as expected. It still looks similar to the visualization of ChatUniV. 3. Regarding object-centric/slot-attention in videos, there are some related works that are not discussed. For example, <Object-Centric Learning for Real-World Videos by Predicting Temporal Feature Similarities, NeurIPS 2023>, <Unsupervised Open-Vocabulary Object Localization in Videos, ICCV 2023>. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
A related work, ChatUniV in CVPR 2024, uses a clustering algorithm to merge visual tokens, thereby reducing computational cost. Slot attention can be regarded as an upgraded (learnable) clustering algorithm. Apart from this, what is the key difference between Slot-VLM and ChatUniV? 2. The SlowFast network was initially proposed to address the video redundancy problem. In Slot-VLM, the slot attention also seems to reduce redundancy by aggregating the visual tokens into compact slot representations. In that case, is it still necessary to use SlowFast? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to the weaknesses and questions above. In summary, the object-centric inductive bias used in this work is interesting. It also shows potential for handling long videos given the token compression strategy and the impressive performance on standard benchmarks. However, the claim of Object-Event Slots in the title is questionable. 1) The object-slots seem not to effectively group objects according to the visualization. 2) Event-slots: see weakness point 1. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback regarding the idea, sufficiency of experiments, and the surprisingly good performance. We greatly appreciate your insights and suggestions and will thoroughly incorporate them into our revised manuscript. Below, we provide detailed responses. **Q1**: Are the temporal slots really event slots? Since there are m spatial positions after downsampling, what if an object moves around from one spatial patch to another? Then the event cannot be effectively captured by the slots, since the temporal slot attention happens at a fixed spatial position along the temporal dimension. **A1**: Thank you for the insightful question. We understand the concern. Our intention with the term “event slots” is to conceptually align these slots with events at a high level, as mentioned in lines 203-204 of the manuscript: “Temporal slots aggregate the temporal tokens for explaining parts of the input, similar to identifying events. Thus, we name the learned slots as event-centric slots.” We considered this question during our design. To alleviate the issue, instead of learning event slots for each spatial position, we divide the feature map into 4x4=16 non-overlapping large spatial regions (downsampling), with each region corresponding to a large local region of 56x56 pixels when the frame resolution is 224x224. **This allows us to observe the temporal dynamics of a large local region (56x56 pixels), which helps infer the evolution of part of an event within that region.** We acknowledge that this method might not perfectly capture the entire object dynamics, but it allows for a partial understanding of object evolution within a large local region. By leveraging the learned slots from all 16 large local regions, the LLM can achieve a more comprehensive perception of global temporal dynamics. 
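The region layout just described can be sketched with simple shape bookkeeping. The 224x224 frame resolution, 4x4 downsampling, and 56x56-pixel regions come from the reply; this is an illustration, not the actual model code, and the variable names are our own.

```python
# Illustrative shape bookkeeping for the Event-Slots region design.
frame_resolution = 224
region_grid = 4                                  # downsample to a 4x4 grid
n_regions = region_grid * region_grid            # 16 non-overlapping regions
region_pixels = frame_resolution // region_grid  # each covers 56x56 pixels
# Temporal slot attention then runs independently inside each of the 16
# regions, along the time axis, so each learned slot summarizes how one
# large local region evolves over the video.
```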
The temporal slots aggregate semantically consistent temporal segments, akin to how spatial slots aggregate spatially consistent regions (object-centric slots). While the segments within a slot might not encompass a complete event, they are likely part of the same event. We will revise the manuscript to clarify this design consideration and improve the explanation of how event slots work in our model. **Q2**: The visualization seems poor. It still looks similar to the visualization of ChatUniV. What is the key difference between Slot-VLM and ChatUniV? **A2**: Thank you for your insightful question. Yes, the visualization of slot attention is still not very satisfactory. Indeed, unsupervised object-centric representation learning remains a challenging task in the field. Our Slot-VLM significantly outperforms ChatUniV (see Table 1) for two main reasons. 1) **Slots can preserve more original information than clustering centers.** Slot attention is trained to reconstruct the original features from slots. In contrast, ChatUniV uses the average token within each cluster to represent the corresponding cluster, where the averaging can result in a loss of fine-grained details, and this detailed visual information is crucial for accurate video understanding and reasoning. 2) **Slots are learnable and can be jointly optimized within the entire framework, whereas clustering (being not learnable) in ChatUniV is task-agnostic.** Our ablation study below shows that freezing the slot attention subnetworks at Stage 3 in our framework leads to a noticeable drop in performance compared with our final scheme, in which slot attention is jointly tuned (see Table F). Note that we will add results for freezing the slot attention subnetworks at both Stage 2 and Stage 3 once we obtain them; we expect these to be lower than those for freezing the slot attention subnetworks only at Stage 3. Table F: Ablation study on the influence of end-to-end optimization of the slot attention modules. 
Slot-VLM (slot frozen) denotes that the slot attention modules are frozen in Stage 3. | Models | In-domain (Acc./Score) | MSVD-QA (Acc./Score) | |:----------------------:|:----------------------:|:--------------------:| | Slot-VLM (slot frozen) | 46.2/2.68 | 74.0/3.74 | | Slot-VLM | 48.8/2.75 | 74.9/3.76 | **Q3**: Add related works regarding object-centric/slot-attention in videos. **A3**: Thank you for the helpful suggestion; we will add the following discussion. Some works have also explored object-centric learning in videos. VideoSAUR [A] extends the method of DINOSAUR [27] to video-based object-centric learning by incorporating a temporal feature similarity loss, which introduces a motion bias for each frame. Fan et al. [B] propose an unsupervised method for open-vocabulary object localization in videos, where they run slot attention only once on the entire spatio-temporal features to obtain video slots for all frames. In contrast, we leverage slot attention to learn semantically decoupled tokens as the input to LLMs and investigate the effectiveness of aligning these ‘concept tokens’ with the LLM input. (Note that [A] and [B] denote the two referred papers, respectively.) **Q4**: Is it still necessary to use SlowFast? **A4**: Yes, we have demonstrated the necessity of our two-branch design in lines 309-320 of our manuscript. Table G shows that directly extending slot learning to joint spatial-temporal features increases memory and computation requirements, as well as the optimization difficulty. Table G: Ablation study on the effectiveness of joint spatial-temporal slot learning vs. our two-branch design. The FLOPs and number of parameters for the reduced token generation module are presented. | Model | In-domain (Acc.) | MSVD-QA (Acc.) | #FLOPs (G) | #Param. (M) | |:--------------:|:---------:|:-------:|:----------:|:-----------:| | Slot-Joint-VLM | 46.7 | 72.8 | 1304.8 | 13.7 | | Slot-VLM | 48.8 | 74.9 | 41.6 | 27.3 | --- Rebuttal Comment 1.1: Title: Response of rebuttal and further questions Comment: I appreciate the detailed rebuttal from the authors. Regarding the definition of the event slots, please ensure that this clarification is added to the paper, as the definition is somewhat misleading. As for the poor visualization results, I understand that object-centric learning methods struggle to handle real-world scenes. But some related works, e.g., <Bridging the Gap to Real-World Object-Centric Learning>, already show decent object grouping results on real-world data. Therefore, I'm still a bit confused about the visualization. If the visualization is not good, then the object-slot and event-slot definitions are a bit more questionable. For the conceptual comparison with Chat-UniV, the authors mentioned that "Slots can preserve more original information than clustering centers." and "Chat-UniV ... averaging can result in a loss of fine-grained details". As far as I know, a slot-attention module usually uses a very compact slot representation with a low embedding dimension to form an information bottleneck. I'm curious about the implementation of the slot attention in the paper. What is the dimension? How do you feed the slots into the LLM? Why does it preserve more abundant information than average pooling of visual tokens? And the last point about SlowFast and Slot-Joint-VLM: since there would be a hyper-parameter defining the number of slots in your model, if you define the total number of slots to be the same in the two settings in Table G, what brings about the significant difference in FLOPs? Thanks. --- Reply to Comment 1.1.1: Title: Response to the questions Comment: Thank you very much for your insightful questions and constructive suggestions, which are very helpful in improving our work. 
We will add the clarification related to event slots to the revised manuscript. **About the visualization**: As far as we know, DINOSAUR [27], which you referred to and which learns object-centric representations on pre-trained feature spaces (e.g., DINO, CLIP features) by reconstructing the features, is the state-of-the-art solution for obtaining object grouping results on real-world data. As the visualizations in [27] show, the performance also depends on the complexity of the image contents. It can work well on simple datasets such as MOVi-C and MOVi-E (see Fig. 2 in [27]) but is still not perfect on more complicated images (e.g., COCO). We use the same technique as DINOSAUR to learn slots by reconstructing pre-trained CLIP features. Even though the visualization is still not good, we can observe that the learned slots tend to group features of similar semantics. Our Slot-VLM achieves superior performance, demonstrating strong potential. We believe that a more advanced object-centric learning method would further enhance the performance of our framework, and we are eager to try one when a better object-centric learning method becomes available. [27] Bridging the Gap to Real-World Object-Centric Learning, ICML'23. **About preservation of information**: Thank you for the good question. The dimension of a slot is 1024. The slot-attention module generates compact slot representations, where the number of output slots is much smaller than the number of input tokens. However, the embedding dimension is **not** low. Take the Object-Slots branch as an example. For a video frame with features of 16x16x1024 (height x width x dimension), i.e., 16x16=256 tokens, slot attention takes the 256 tokens as input and generates 8 slots, each of 1024 dimensions. After going through the projection layers, the slots for t (t=8) frames are sequentially concatenated to form 64 slots as the input to the LLM, with each slot as a token. 
The slot attention module is optimized to enable the slots to reconstruct the input as well as possible, even though the bottleneck/compression leads to the loss of some information. Each slot is a weighted summarization of the input. In contrast, the formation of cluster centers (using a pooling/averaging operation) is not optimized to preserve information. Therefore, at the same number of output tokens, slots are able to preserve more information. **About SlowFast and Slot-Joint-VLM**: Yes, the total numbers of slots are the same. For the cross-attention operation in the slot attention module, the complexity is proportional to the number of Keys/Values (i.e., the number of input tokens). Our decomposition of the spatial-temporal tokens into spatial and temporal dimensions enables slot attention to operate along each dimension (spatial or temporal) separately, which greatly reduces the number of Keys/Values (input tokens) used to infer slots. In addition, we have conducted temporal down-sampling in the spatial branch (Object-Slots branch) while preserving the spatial resolution. Similarly, we have conducted spatial down-sampling in the temporal branch (Event-Slots branch) while preserving the temporal resolution. This further reduces the complexity. We will add more analysis to our revision. Thank you again for your great efforts to make our paper clear and solid. Any questions/comments are warmly welcomed!
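The token-count and Key/Value-count arithmetic in the two replies above can be sketched as follows. The 16x16x1024 per-frame features, 8 slots of dimension 1024, and t = 8 frames come from the reply; the temporal length T used for the joint-variant comparison is an illustrative assumption, not a reported setting.

```python
# Token-count arithmetic for the Object-Slots branch (from the reply).
H, W, dim = 16, 16, 1024
tokens_per_frame = H * W              # 256 input tokens per frame
slots_per_frame = 8                   # slot-attention output per frame
t = 8                                 # frames concatenated for the LLM
llm_tokens = slots_per_frame * t      # 64 slot tokens fed to the LLM

# Cross-attention cost in slot attention scales with the number of
# Keys/Values, i.e., the number of input tokens. Comparing the joint
# spatio-temporal variant with the decomposed two-branch design:
T = 8                                 # illustrative temporal length
joint_kv = T * H * W                  # joint variant: 2048 tokens at once
spatial_branch_kv = H * W             # Object-Slots branch, per frame
temporal_branch_kv = T                # Event-Slots branch, per region
```

This is only bookkeeping, but it shows why decomposing along spatial and temporal dimensions shrinks the Key/Value set each slot-attention call must attend over.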
Summary: This paper aims to learn a higher level of abstract representation as the input tokens to an LLM. The paper proposes a dual-branch design (object- and event-centric) to extract concepts and uses a three-stage training paradigm. The proposed Slot-VLM is evaluated on three VQA datasets and has shown state-of-the-art performance. Strengths: - The idea of decomposing the semantics of input tokens to the LLM is interesting and crucial for human-like reasoning. - The evaluation results show strong performance on three datasets (Table 1) and better efficiency (4.4), which validates that such structured representations can improve the performance of VQA. - The extensive ablation study (Section D) is meaningful and comprehensive. - The details for reproducing the method are comprehensive. Weaknesses: My main concern is that the authors claim the temporal branch of Slot-VLM is the event branch. The slot attention focuses on local regions temporally, which makes it capture only low-level local motion instead of actual events with high-level semantics. Moreover, the visualization of temporal attention masks (Figures 8 and 10) can barely provide cues that the model can learn event-centric representations (L53, L55, L65). Thus, I believe the event-centric slot is overclaimed throughout the article. I hope the authors can clarify this. Technical Quality: 2 Clarity: 3 Questions for Authors: See weaknesses. The claim of the event-centric slot is my main concern. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes, the authors raise the limitations of this paper in two directions: imperfect concept discovery and extension to hours-long videos. 
However, it would enhance the paper much more if the authors could provide more insights into what other semantic concepts the proposed model lacks, which would complement the existing object- and event-centric representations (Although I am not convinced the model actually captures the “event” semantic) Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback and helpful suggestions. We greatly appreciate your recognition of the interest and importance of the idea, the strong performance, the meaningful and comprehensive ablation study, and the comprehensive reproducibility details. We have carefully considered your valuable suggestions and comments and will incorporate them into our revised manuscript. Please find our detailed responses below. **Q1**: My main concern is that the authors claim the temporal branch of Slot-VLM as the event branch. The slot attention focuses on local regions temporally, which makes it only capture low-level local motion instead of actual events with high-level semantics. Moreover, the visualization of temporal attention masks (Figures 8 and 10) can barely provide cues that the model can learn event-centric representations (L53, L55, L65). Thus, I believe the event-centric slot is overclaimed in the entire article. I hope the authors can clarify this. **A1**: Thank you for your constructive comments. We will revise the descriptions for clarity. Our intention is to conceptually align event slots with events at a high level, as defined in lines 203-204 of the manuscript: "**Temporal slots aggregate the temporal tokens for explaining parts of the input, similar to identifying events. Thus, we name the learned slots as event-centric slots.**" More precisely, we leverage temporal slots to aggregate semantically consistent temporal segments, akin to how spatial slots aggregate spatially consistent regions, which are termed object-centric slots. While these temporal segments within a slot might not form a complete event, they are likely part of the same event. **About local regions**: We understand your concern regarding the focus on local regions. This was a careful consideration in our design process. 
To alleviate this, instead of learning event slots for each spatial position along the temporal dimension, we divide the feature map into 4x4=16 non-overlapping spatial regions (down-sampling), where each region corresponds to a large local region of 56x56 pixels when the frame resolution is 224x224. Observing the temporal dynamics of such a large local region (56x56 pixels) allows for partial inference of event evolution within the region, although it may not be perfect. Note that the temporal aggregation through slots is performed on CLIP features, which possess high-level semantics, thus enhancing the slots’ ability to capture high-level semantics. Moreover, by utilizing the learned slots for all 16 large local regions, the LLM can achieve a global perception of temporal dynamics. **About visualization**: We recognize that the current technique of using slot attention for forming object (spatial) and action/event (temporal) segmentations is not yet satisfactory. In Figure 10 (a), the good news is that similar contents tend to be allocated to the same slot and different slots capture different contents. The patterns from slots (Figure 10 (a)) are much better than those obtained from Q-Former (see Figure 10 (b)). We acknowledge that our work is still far from forming ideal events, but we have taken a small step towards that goal. The superior performance demonstrates the strong potential of our proposed idea of decomposing the semantics of video tokens to align with the LLM. More joint efforts from the community are still needed to push this field forward. --- Rebuttal 2: Comment: I thank the authors for their response. My main concern about the clarity of "event slots" has not been addressed well, which is the major claim of the paper. As mentioned by reviewer KJVB, the paper only considers "events" that represent the temporal change within a patch. This would limit the proposed method to a simple domain of activities. 
For example, such a definition of event slots would not be able to handle a scenario where a subject manipulates objects with hands (e.g., Assembly-101 and Ego4D) or kitchen scenarios (e.g., Epic-kitchen) where the subjects and their hands move consistently. I would suggest that the authors revise the claim of the event slot as the term can be misleading. Moreover, from the rebuttal by the authors "slots is performed on CLIP features, which possess high-level semantics, thus enhancing the slots’ ability to capture high-level semantics" is not very convincing. The statement should be substantiated with some qualitative results that can demonstrate the event slots actually capture high-level semantics, as the current ones do not really show meaningful semantics. --- Rebuttal Comment 2.1: Comment: Thank you very much for your great efforts and valuable feedback. We will follow your advice to revise the claim of the event slot to more accurately reflect the capabilities and limitations. Particularly, we will reflect its functioning on "large local regions (e.g., patches of 56x56 pixels)" and capturing of portions of events. We appreciate your constructive feedback, which contributes a lot to the improvement of our work! Thank you!
Summary: This paper introduces a new framework called Slot-VLM that aims to aggregate spatial and temporal relationships between frames for more effective video understanding with Large Language Models (LLM). The Slot-VLM approach aims to generate video tokens that are semantically disentangled for better alignment with the frozen LLM. Specifically, the paper proposes the Object-Slots module to extract object-centric slots from high spatial resolution features that are sampled with a low frame rate. Additionally, it also introduces the Event-Slots module, which helps to aggregate the temporal dynamics between frames by learning event-centric slots from features with low spatial resolution but sampled at a high frame rate. The authors demonstrate the benefits of their proposed Slot-VLM approach by evaluating across three open-ended video question-answering datasets including MSVD-QA and ActivityNet-QA, where it outperforms existing video-language models by a significant margin. Strengths: 1) The model figures are informative and especially helpful in helping the reader to understand the different stages of the learning algorithm as well as the intuition behind each stage. The slot attention approach, from the aggregation of video tokens to the combination of object and event-centric slots as well as their input to the LLM, is described well. 2) While the introduced Slot-VLM bears strong similarities to existing work on hierarchical understanding of videos such as the SlowFast approach, the concept of generating semantically-disentangled video tokens to align with frozen LLMs is relatively interesting and novel. In contrast to existing video-language models which often rely on pooling or compressing information into learnable queries, the ability to generate interpretable attention maps is particularly helpful to visualize what the model is focusing on. 3) In this paper, the authors conduct comprehensive comparisons with state-of-the-art video-language models. 
These evaluations highlight the limitations of existing work under relatively fair settings of using the same visual and language models as well as similar training data. The results also further emphasize the improvements brought by Slot-VLM. Weaknesses: 1) The proposed Slot-VLM approach appears to be much more effective at aggregating disentangled spatial and temporal relationships between frames, as evidenced by the performance gap between its achieved results and those of Video-ChatGPT and Video-LLaVA in Table 1. However, it is unclear how much of the performance gain is also due to the final number of video tokens that are passed into the underlying LLM for reasoning and generation. It would be helpful to include a comparison of the different number of video tokens used among the different video-language models as well as the number of trainable parameters during each of the training stages. 2) It is also unclear how scalable such an approach will be in handling much longer videos. Currently, it appears that 8 frames are used for the object-slots module. Along with using 8 event-centric slots in total across shorter videos, this may limit the performance of the proposed Slot-VLM approach on much longer videos, such as those used in the EgoSchema evaluation benchmark. Similarly, is there also a more efficient way to aggregate spatiotemporal relationships over more frames beyond just increasing the number of object and event-centric slots since this will increase the computational demand in the LLM? 3) It may be beneficial to the reader to include additional ablation experiments and analysis over the low frame sampling rate used in the Object-Slots Branch as well as the stride used for pooling across the spatial and temporal axes in the Event-Slots Branch. Currently, these hyperparameters appear to be selected based on final downstream performance but there is hardly any discussion on them. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Please see above-mentioned limitations. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your constructive suggestions and for your positive feedback on the novelty and interest of our proposed concept, the ability to generate interpretable attention maps, the comprehensive comparisons, and the clarity of the presentation. The detailed responses are provided below.

**Q1**: Include a comparison of the number of video tokens used among the different video-language models, as well as the number of trainable parameters during each of the training stages.

**A1**: We will include these in our revision. Table B below shows that the performance gain is not due to the final number of video tokens: our Slot-VLM uses only 192 tokens, which is much smaller than many other models.

Table B: The number of video tokens used. ‘Varied length’ denotes that the number varies with the video length. ‘-’ denotes that we did not find the information in their papers and will investigate their code in the future to fill it in. * denotes the likely number based on the paper’s description.

| Model | Number of video tokens |
|:------------------------:|:-----------------:|
| Video LLaMA | 256 |
| Video Chat | - |
| Video-ChatGPT | 356 |
| Chat-UniVi | Varied length |
| Video-LLaVA | - |
| Video-LLaVA$^†$ | - |
| BT-Adapter | 256 |
| LLaMA-VID | Varied length |
| VideoChat2 | 1536* |
| MovieChat | - |
| Slot-VLM (Ours) | 192 |
| Slot-VLM$^†$ (Ours) | 192 |

Different models have different numbers of training stages and trainable parameters per stage. Most papers lack detailed information. For simplicity, we present the trainable parameters for each stage of Slot-VLM and the likely numbers for VideoChat2 based on the paper's description in Table C. Our Slot-VLM uses far fewer trainable parameters than VideoChat2 and requires fewer computing resources in training.

Table C: The number of trainable parameters (M: million; B: billion) in each stage for VideoChat2 and our Slot-VLM. 
| Stage | Stage 1 | Stage 2 | Stage 3 |
|:-----------:|:-------:|:-------:|:-------:|
| VideoChat2 | 101M | 492M | 7B |
| Slot-VLM | 27M | 37M | 33M |

**Q2**: It is also unclear how scalable such an approach will be in handling much longer videos.

**A2**: **Our approach is directly applicable to videos of a few minutes.** We have validated our framework on ActivityNet-QA (see Table 1), where the average length of the video clips is 111.6 seconds and the maximum length is 285 seconds, which is comparable to the EgoSchema benchmark mentioned by the reviewer (180 seconds). We also tested a 500-question subset of EgoSchema. Table D shows that our Slot-VLM achieves superior performance, outperforming Video-LLaVA by 7% in accuracy.

Table D: Performance (accuracy) comparison on the subset of EgoSchema.

| Method | ViperGPT | Sevila | Video-LLaVA | mPLUG-Owl | LLoVi | Slot-VLM |
|:------:|:------:|:------:|:------:|:-----:|:------:|:------:|
| Accuracy | 15.8 | 25.7 | 36.8 | 33.8 | 50.8 | **55.8** |

**For handling even longer videos, our framework can be extended.** For example, for a three-hour video, we could partition the video into chunks of $\tau$ seconds each, summarizing dense video features into $N_s$ object-centric and $N_f$ event-centric tokens for **each** chunk. These tokens can then be concatenated sequentially as input to the LLM for inference. With $N_s+N_f=192$ tokens per chunk and a duration of $\tau=96$ seconds (1 fps) per chunk, we equivalently have 2 tokens per frame on average, facilitating hour-long video handling. Note that LLaMA-VID (Li et al., 2023b) also encodes each frame into two tokens but overlooks the exploration of temporal correlation (ours outperforms LLaMA-VID on three benchmarks in Table 1). This strategy ensures that the total number of video tokens is proportional to the video length in a manageable and tolerable way. 
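The per-chunk arithmetic of this extension can be sketched as follows (the constants are the values stated in the text; the helper name is ours):

```python
import math

# Chunking arithmetic from the response above: each chunk of tau seconds,
# sampled at 1 fps, is summarized into N_s + N_f = 192 tokens.
tau = 96                 # seconds per chunk, sampled at 1 fps
tokens_per_chunk = 192   # N_s object-centric + N_f event-centric tokens

tokens_per_frame = tokens_per_chunk / tau   # 2.0 tokens per frame on average

def total_video_tokens(video_seconds: int) -> int:
    """Total token budget grows linearly with video length under chunking."""
    return math.ceil(video_seconds / tau) * tokens_per_chunk

print(tokens_per_frame)                 # 2.0
print(total_video_tokens(3 * 60 * 60))  # three-hour video -> 21696 tokens
```

So a three-hour video would need 113 chunks and about 21.7K video tokens in total, linear in the video length rather than in the raw token count.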
More efficient strategies for aggregating spatiotemporal relationships over longer videos will require further research and future efforts.

**Q3**: Additional ablation over the low frame sampling rate used in the Object-Slots branch, as well as the stride used for pooling in the Event-Slots branch.

**A3**: We conducted ablation studies using single-branch settings. For the Object-Slots branch, we tested three frame sampling rates: 4 frames (Object-Slot-VLM (4 frames)), 8 frames, and 16 frames per video. For the Event-Slots branch, we tested three pooling strides to achieve spatial resolutions of 2x2 (Event-Slot-VLM (2x2)), 4x4, and 8x8. Table E shows the results. Increasing the number of sampled frames or the spatial resolution improves performance but increases the number of video tokens. By default, we use 8 sampled frames and a 4x4 spatial resolution to balance complexity (192 tokens in total) and performance.

Table E: Ablation studies on sampling different numbers of temporal frames for the Object-Slots branch, and on using different spatial resolutions for the Event-Slots branch.

| Model | # Video tokens | In-domain (Acc./Score) | MSVD-QA (Acc./Score) |
|:---------------------------:|:--------------:|:----------------------:|:--------------------:|
| Object-Slot-VLM (4 frames) | 32 | 42.13/2.55 | 72.36/3.65 |
| Object-Slot-VLM (8 frames) | 64 | 46.5/2.69 | 73.1/3.71 |
| Object-Slot-VLM (16 frames) | 128 | 46.82/2.70 | 74.29/3.72 |
| Event-Slot-VLM (2x2) | 32 | 39.28/2.44 | 70.86/3.55 |
| Event-Slot-VLM (4x4) | 128 | 47.1/2.67 | 73.1/3.67 |
| Event-Slot-VLM (8x8) | 512 | 50.35/2.79 | 76.56/3.77 |

--- Rebuttal 2: Title: Citations in response A2 Comment: For the schemes in our response A2, the paper for ViperGPT is "ViperGPT: Visual Inference via Python Execution for Reasoning". The paper for LLoVi is "A Simple LLM Framework for Long-Range Video Question-Answering". The paper for mPLUG-Owl is "mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality". 
--- Rebuttal Comment 2.1: Comment: Thank you very much for your comprehensive efforts in addressing my concerns. In particular, I find your response on extending the proposed approach to much longer videos insightful. I also appreciate your efforts on the additional ablation experiments, given the time and computational constraints on your end, as well as providing more evaluation results on additional QA benchmarks. Thus, I will retain my original rating. --- Reply to Comment 2.1.1: Title: Thank you Comment: We are very grateful for your valuable suggestions and feedback! Incorporating these results and clarifications are very helpful to make this paper more solid and comprehensive. Thank you very much for your great efforts and time!
Summary: This paper proposes Slot-VLM, which aims to generate a small set of semantically decoupled video tokens that comply with the LLM for effective video reasoning. It designs a dual-branch Object-Event Slots module to learn object-centric slots and event-centric slots that jointly capture spatial object details and temporal dynamics. Experimental results demonstrate the superiority of the framework and show the effectiveness of using semantics-decoupled representations for aligning with the LLM. Strengths: Overall, the main contribution of this paper is introducing slot attention as the multimodal connector between the LLM and visual encoders. Even though it is more of a solid technical implementation based on previous works, I think it is a good attempt in the video-language domain and provides more insights into spatial-temporal modeling in MLLMs. Also, the paper writing is good and the main motivation makes sense; the results show both efficiency and effectiveness. Weaknesses: - The proposed Slot-VLM is tested on open-ended MSRVTT-QA, MSVD, and ActivityNet-QA with automatic ChatGPT-based metrics. Although the metric is applied by some previous works, I don’t think this is a stable, reliable, and explainable enough evaluation, as it may vary a lot across GPT versions (check the conclusion in [1]) and may generate different responses even for the same input. - Also, MSRVTT-QA/MSVD/ActivityNet-QA are not designed from an object-interaction perspective, which may not be effective enough for validating the object-centric design. - Based on the previous two points, I would suggest testing on recent multi-choice QA benchmarks, including STAR [2] / NExT-QA [3] / EgoSchema [4], which are annotated with clear answer choices and designed from a human-object interaction view. And compare with recent model works like Sevila [5], MIST [6], LRR [7], GF [8], which are also based on query-formers, or iterative attention on regions in the video. 
[1] FreeVA: Offline MLLM as Training-Free Video Assistant, Arxiv24. [2] Star: A benchmark for situated reasoning in real-world videos, NeurIPS 2021. [3] Next-qa: Next phase of question-answering to explaining temporal actions, CVPR 2021. [4] Egoschema: A diagnostic benchmark for very long-form video language understanding, NeurIPS 2023. [5] Self-chained image-language model for video localization and question answering, NeurIPS 2023. [6] Mist: Multi-modal iterative spatial-temporal transformer for long-form video question answering, CVPR 2023. [7] Look, Remember and Reason: Grounded reasoning in videos with language models, ICLR 2024. [8] Glance and focus: Memory prompting for multi-event video question answering, NeurIPS 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: please see weakness Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: authors provide limitation discussion in the paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your positive feedback on our insights into spatial-temporal modeling in MLLMs, the main motivation, and the effectiveness of the method, as well as the quality of our writing. We have carefully considered your valuable comments and suggestions and are committed to incorporating them into our revised manuscript. Please find our detailed responses below. **Q1**: The proposed Slot-VLM is tested on open-ended MSRVTT-QA, MSVD, and ActivityNet-QA with automatic ChatGPT-based metrics. Although the metric is applied by some previous works, I don’t think this is a stable, reliable, and explainable enough evaluation, which may vary a lot across GPT versions (check the conclusion in [1]) and may generate different responses even for the same input. **A1**: We acknowledge your comment and will include the comparisons on multi-choice QA benchmarks in our manuscript (see Response A3 below). **Q2**: Also, MSRVTT-QA/MSVD/ActivityNet-QA are not designed from an object-interaction perspective, which may not be effective enough for validating the object-centric design. **A2**: Thank you for your comment and suggestion. We explored additional datasets for validation, as discussed in Response A3 below. Note that our framework is expected to work well in general, since each image in a video is a capture of the world consisting of objects and backgrounds, where object-centric representations can be considered the basic units. This can also be intuitively explained by human visual reasoning, where object representations act as basic and fundamental representations. For humans, visual signals are processed initially through the primary visual cortex (V1) and subsequently integrated into higher-level visual processing areas, resulting in the formation of complex visual object representations. Such high-level object representations, together with brain-stored knowledge, are then used for logical reasoning in the brain. 
Similarly, our Slot-VLM generates object-centric and event-centric representations to provide a high-level vision source for effective LLM reasoning, where the object-centric design would be generally helpful. The strong performance of our Slot-VLM (Table 1) also demonstrates the effectiveness of our design. In addition, for the three datasets, we found that there are plenty of video-question pairs related to querying the states/actions of objects and their relations. For example: What is a dog doing? What’s the shape of the table? What are the animals that appear in the video? What is behind the person sitting in the video? Our object-centric design provides an effective bridge between visual and language modeling.

**Q3**: Based on the previous two points, I would suggest testing on recent multi-choice QA benchmarks, including STAR [2] / NExT-QA [3] / EgoSchema [4], which are annotated with clear answer choices, and designed from a human-object interaction view. And compared with recent model works like Sevila [5], MIST [6], LRR [7], GF [8], which are also based on query-former, or iteratively attention on regions in the video.

**A3**: Thank you for the helpful suggestions; we will add the comparisons in our revision. For EgoSchema, we have tested the performance on the 500-question subset, and Table A-1 shows the results. Slot-VLM significantly outperforms Sevila by 30% and outperforms the comparable 7B model Video-LLaVA by 19% in accuracy.

Table A-1: Performance (accuracy) comparison on the subset of EgoSchema.

| Method | ViperGPT [9] | Sevila | Video-LLaVA | mPLUG-Owl | LLoVi [10] | Slot-VLM |
|:--------:|:------:|:------:|:-----------:|:---------:|:--------:|:--------:|
| Accuracy | 15.8 | 25.7 | 36.8 | 33.8 | 50.8 | **55.8** |

[9] ViperGPT: Visual Inference via Python Execution for Reasoning
[10] A Simple LLM Framework for Long-Range Video Question-Answering

For STAR, Table A-2 shows the comparisons. 
The four referred works [5-8] report in-domain accuracy, where training is performed on STAR. In contrast, without accessing STAR during training, our model is tested in a zero-shot manner. Our Slot-VLM achieves competitive performance with the generalizable models Flamingo-9B and InternVideo. Note that InternVideo uses a much larger number of training videos (12 million) than ours, whereas our model uses only 100K videos for training.

Table A-2: Performance (accuracy) comparison on STAR.

| Method | Accuracy |
|:--------------------------:|:----:|
| **Dataset specific trained (In domain)** | |
| MIST [6] | 51.1 |
| GF(uns) [8] | 53.9 |
| Sevila [5] | 64.9 |
| LRR [7] | 70.5 |
| **Zero-shot (Generalization)** | |
| Flamingo-9B | 41.8 |
| InternVideo | 41.6 |
| Slot-VLM | **42.7** |

For NExT-QA, Table A-3 shows that our Slot-VLM is competitive with VideoChat2. We believe that using more training videos, as VideoChat2 does (1.1 million), would further enhance performance.

Table A-3: Performance (accuracy) comparison on NExT-QA.

| Method | Accuracy |
|:--------------------------:|:----:|
| **Dataset specific trained (In domain)** | |
| MIST [6] | 57.2 |
| GF(uns) [8] | 58.8 |
| Sevila [5] | 73.8 |
| **Zero-shot (Generalization)** | |
| InternVideo [11] | 49.1 |
| Mistral(7B) [12] | 51.1 |
| VFC [13] | 51.5 |
| LLoVi [10] | 54.3 |
| MVU(13B) [14] | 55.2 |
| ViperGPT(GPT-3.5) [9] | 60.0 |
| LangRepo(12B) [15] | 60.9 |
| VideoChat2 [16] | 61.7 |
| Slot-VLM | **62.0** |

[11] InternVideo: General Video Foundation Models via Generative and Discriminative Learning
[12] Mistral 7B
[13] Verbs in Action: Improving verb understanding in video-language models
[14] Understanding Long Videos in One Multimodal Language Model Pass
[15] Language Repository for Long Video Understanding
[16] MVBench: A Comprehensive Multi-modal Video Understanding Benchmark

--- Rebuttal Comment 1.1: Title: Thanks for your detailed response and experiments Comment: Thanks to the authors for their efforts to provide more results on multi-choice QA benchmarks/datasets. 
Happy to see the proposed Slot-VLM achieve comparable performance with limited resources. Overall, I would highly suggest including those multi-choice experiments as a main result in the final version, considering it is a more stable/interpretable evaluation and can provide more grounded hints for further work. On the other hand, I would also suggest including a more inclusive comparison on those multi-choice QA benchmarks, for example: (1) SeViLA achieves higher performance on zero-shot NExT-QA/STAR with Flan-T5-3B; (2) VideoChat2 achieves higher zero-shot performance on STAR with a similar LLM. Beating absolute SOTA models is generally not the main research focus/interest from my view, but rather the effectiveness of the proposed slot design (which is already included in the paper's ablation studies); however, I believe a comprehensive table (e.g., including connector/LLM types/extra pre-training) would provide more interesting/solid insights for future module designs. Thanks for the effort again, and I am willing to increase my score based on these rebuttal results. --- Reply to Comment 1.1.1: Comment: Thank you very much for your constructive suggestions. We will include these results in our revised manuscript. Moreover, we will follow your insightful suggestions to present more comprehensive comparisons, including comprehensive results and model/training information. Thank you again for your great efforts to help us improve!
Rebuttal 1: Rebuttal: Dear Reviewers and Area Chair, Thank you very much for your great efforts and insightful feedback on our paper. **We are grateful for your positive feedback on our motivation/insights (Reviewers jNDE, KJVB), idea novelty and interest (Reviewers gzPw, wQQx, KJVB), comprehensive experiments and method effectiveness (Reviewers jNDE, gzPw, wQQx, KJVB), and paper writing and comprehensive details (Reviewers jNDE, gzPw, wQQx).** We have carefully considered each of your comments and suggestions and provide detailed responses to address the concerns. In particular, we summarize our responses to each reviewer’s main concerns/comments below: 1) Based on **Reviewer jNDE**’s concern about ChatGPT-based metrics and suggestion to test on recent multi-choice QA benchmarks, we have conducted additional zero-shot testing on recent multi-choice QA benchmarks and provide the results (see A3). These results demonstrate the robustness and generalizability of our approach. 2) Based on **Reviewer gzPw**’s suggestions and comments, we have added ablation studies on the influence of the frame sampling rate and the pooling strides (see A3). We also show that our method is scalable to longer videos, e.g., EgoSchema (see A2). This scalability ensures the applicability of our method to diverse video datasets. 3) Regarding the questions of **Reviewers wQQx and KJVB** about the event slots, we will clarify the definition and add more explanation in the revision (see A1 to Reviewer wQQx and A1 to Reviewer KJVB). Temporal slots aggregate the temporal tokens for explaining parts of the input, similar to identifying events. Thus, we name the learned slots event-centric slots. More precisely, we leverage temporal slots to aggregate semantically consistent temporal segments, akin to how spatial slots aggregate spatially consistent regions, which are termed object-centric slots. 
While these temporal segments within a slot might not form a complete event, they are likely part of the same event. 4) Regarding the questions of **Reviewer KJVB**, we have analyzed the reason why Slot-VLM significantly outperforms Chat-UniVi (see A2) and demonstrated the necessity of the two-branch design (see A4). The analysis underscores the advantages of our architectural choices in enhancing video-language modeling. We believe that incorporating your valuable insights will significantly enhance the quality and clarity of our paper. We hope our responses adequately address your concerns, and we look forward to any further feedback you may have. Please feel free to share any additional questions or concerns! Many thanks, All authors
NeurIPS_2024_submissions_huggingface
2024
Neural Characteristic Activation Analysis and Geometric Parameterization for ReLU Networks
Accept (poster)
Summary: This paper identifies and attempts to solve the problem of unstable activation boundaries in ReLU neural networks. The authors define a ReLU unit in terms of its activation boundary, which is the set of inputs that cause the preactivation to be zero. The point on this hyperplane closest to the origin (termed a CAB) is used to track each neuron's activation boundary. The authors show that standard parameterization, weight norm, and batch norm all suffer from an instability in the direction of the activation boundary when the weights are small, since the direction of the weight can change sign. To address this, they propose to operate in hyperspherical coordinates, where the angles $\theta_1, ... , \theta_{N-1}$ and radius $\lambda$ are used to define the hyperplane instead. Taking gradient steps in this geometric parameterization maintains bounded changes in the angles under a small learning rate, due to a nice metric property of this coordinate system. Strengths: This paper identifies an issue with ReLU neurons that can lead to a training instability at large learning rates and attempts to solve the problem with an elegant parameterization motivated by switching to a spherical coordinate system. The computational cost is of the same order as the original matrix computation, so it is still efficient to compute, unlike other approaches such as natural gradient descent. Next, the parameterization is size independent, which is a nice property not enjoyed by SP networks. Further, the authors provide many experiments which show that the locations of the activation boundaries are more stable under their proposed parameterization, leading to a speed up in optimization. Weaknesses: There are some assumptions in the analysis that I was unsure about (such as the worst-case direction of the perturbations: if instead perturbations were randomly oriented with respect to $w$, does it change the conclusions?). 
The authors also mention that other normalization solutions suffer from similar problems (like LayerNorm etc.), but this would be useful to demonstrate, perhaps in the Appendix. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Some of the stability analysis is performed under the worst-case assumption that $\epsilon \propto w$. Wouldn't this be unlikely to happen for high-dimensional inputs? The main instability is caused when the vectors $w$ change sign/direction under a single step of GD. Could this issue also be solved by initializing the weights in SP to have larger variance relative to the learning rate? 2. Have the authors considered what the structure of the gradients looks like in this parameterization? Naively it seems like some of the gradient entries would have very small magnitude in high dimension since they involve products of bounded trigonometric functions. Is there a "limiting" large width description in this parameterization like for NTK parameterization or $\mu$P? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments on our paper! We will address your comments point by point below. > It would be useful to demonstrate that other normalization solutions suffer from similar problems (like Layernorm etc), perhaps in the Appendix. Thank you for your suggestion. In the camera-ready version, we will add derivations for these to the Appendix. Here, we take Layernorm as an example and illustrate why it suffers from similar problems. The key to the proof for Layernorm is to realize that the only difference between Layernorm and BN is that they normalize the input tensors along different axes (i.e., BN normalizes inputs across the batch axis and Layernorm normalizes inputs across the feature axis). Therefore, the proof for Layernorm is the same as the proof for BN, except that the expectation and variance operators are now computed along the feature axis. > If perturbations $\varepsilon$ were randomly oriented with respect to $w$, does it change the conclusions of the analysis? What about high-dimensional inputs? No, the conclusion does not change. This is because the angle between $\varepsilon$ and $w$ can only be between 0 and 180 degrees in a space of any dimension. Therefore, even if $\varepsilon$ were randomly oriented with respect to $w$ in a high-dimensional space, there would still be a 50% chance that the angle between them is greater than 90 degrees, in which case the gradient update will be unstable, as the direction of the characteristic boundaries/spatial locations is significantly changed. > Could this instability issue also be solved by initializing the weights in SP to have larger variance relative to the learning rate? Unfortunately, no. This is because SP is extremely sensitive to the variance of the initialization distribution, as discussed in Remark 3.5 of the paper.
If we did not strictly follow the variance as suggested in Glorot/He initialization and used a larger variance instead, the final performance of SP would be very poor. > Have the authors considered what the structure of the gradients looks like in this parameterization? Thank you for this insightful question! Empirically, we find that the structure of the activations and their gradients under GmP does exhibit some sparsity pattern, and we conjecture that this is one of the reasons why GmP generalizes well. Interestingly, even with the product structure and some sparsity pattern, we find that empirically the activations and gradients do not vanish during training if we initialize the GmP parameters from the von Mises–Fisher distribution (i.e., uniformly distributed on the hypersphere) as discussed in Remark 3.5. We conjecture that GmP may have a large-width limiting behavior similar to NTK or $\mu$P, which we are currently investigating in a follow-up project. We will add a discussion on this to the camera-ready version. We thank the reviewer again for their positive and insightful comments which have helped us improve this work. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses to my questions. I will maintain my positive score.
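The axis distinction in the Layernorm argument above can be made concrete with a minimal NumPy sketch (our own illustration, not the paper's code): both normalizations apply the same mean/variance operation, just along different axes of the input tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))  # (batch, features)

# BN-style: statistics over the batch axis (one mean/std per feature)
bn = (x - x.mean(axis=0)) / x.std(axis=0)
# Layernorm-style: statistics over the feature axis (one mean/std per example)
ln = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

print(np.allclose(bn.mean(axis=0), 0.0))  # zero mean across the batch
print(np.allclose(ln.mean(axis=1), 0.0))  # zero mean across the features
```

Because the two differ only in the normalization axis, the same small-weight argument given for BN carries over to Layernorm with the expectation and variance taken along the feature axis.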
Summary: This paper introduces a novel approach for analyzing the training dynamics of ReLU networks by examining the characteristic activation boundaries of individual ReLU neurons. The authors identify the instability of common neural network parameterizations and normalizations during stochastic optimization. To address this, the authors propose Geometric Parameterization (GmP), which parameterizes the unit vectors by a hyperspherical coordinate representation rather than scaling. The authors conduct experiments to verify the effectiveness of the method. Strengths: This paper proposes a parameterization method based on a spherical coordinate transformation. It is novel to analyze neural networks from a geometric perspective, and the method solves the problem of instability during stochastic optimization. Besides, this paper is easy to follow, well written, and generally well presented. Weaknesses: Lacks discussion of other activation functions. Technical Quality: 4 Clarity: 4 Questions for Authors: Have the authors explored how GmP performs on other activation functions? Does it also perform better? Suppose we use GmP on the hidden neurons rather than the weights, like scaling operations projecting $x$ onto a hypersphere. In other words, we use $\theta_1,\cdots,\theta_n$ to represent $x$ in a spherical coordinate system. Can the authors give some analysis, theoretically or experimentally? Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback on our paper! We will address your comments point by point below. > Discuss GmP on other activation functions. Our characteristic activation analysis is a very general idea that can be applied to a broader family of activation functions beyond ReLU (e.g., Leaky ReLU, ELU, SReLU, etc.) with exactly the same argument. This is because our definition of spatial locations is essentially any breakpoint in a piecewise activation function, since those breakpoints are the sources of nonlinearity. We will add this explanation to the camera-ready version. We leave the characteristic activation analysis for smooth activations for future work, since we specifically focus on ReLU-like activations as indicated in the title of our paper. > What if we use GmP on the hidden neurons rather than the weights? Thank you for this insightful question! We did consider directly representing the hidden neurons in hyperspherical coordinates, since this is conceptually a more efficient representation. However, we find that it is very difficult to train such neural networks since gradient-based optimization is no longer applicable. We tried to use rejection sampling to allocate the hidden neurons on the hypersphere, but it is inefficient and suffers from the curse of dimensionality in high-dimensional space. With that being said, we do think that this could potentially be a promising direction and are currently working on a more efficient learning algorithm for allocating hidden neurons on the hypersphere as a follow-up project. We will add a discussion of this in the camera-ready version. We thank the reviewer again for their encouraging and insightful comments which have helped us improve this work.
Summary: The authors consider training neural nets with Adam after a change of coordinates to the spherical coordinate system. They show that in the spherical coordinate system, the direction of the half-spaces which specifies the activeness of the ReLUs is stable with respect to small changes in the angles, hence the optimization is supposed to be easier. They perform some experiments to see the advantage of using this coordinate system for optimization in practice. Strengths: This work goes against the conventional practice in deep learning, which is running a first-order method in the standard coordinate system; rather, it changes to spherical coordinates. They further discuss some advantages of this system for the stability of the activation regions. They conduct various experiments to show this advantage empirically. I think this is an interesting idea; the contribution of the current paper should be assessed in their experiments section, where they show the advantage of this change of coordinates in training. However, in the theoretical part there are major concerns that I will mention in the following. Weaknesses: First, the proof of Theorem 3.7 is wrong; I think what the authors meant in the argument of the theorem is $\|u(\theta) - u(\theta + \epsilon)\|$ rather than their dot product (the dot product claim is obviously wrong since if epsilon goes to zero the right-hand side goes to zero while the left-hand side is constant). Even with $\|u(\theta) - u(\theta + \epsilon)\|$ instead, the proof in the appendix is wrong; the correct formula for length given a metric $M_\theta$ on a manifold is: $\|u(\theta) - u(\theta + \epsilon)\| = \int_{0}^1 \sqrt{\epsilon^\top M_{\theta + t\epsilon}\,\epsilon}\,dt$. The proof can be corrected by estimating $M_{\theta + t\epsilon}$ with $M_\theta$. Though the final argument is expected to be true, perhaps with different constants. Second, the paper highly overclaims its theoretical results.
The theoretical claim of the paper is only that the normal vector of the half-space changes smoothly if one parameterizes in spherical coordinates as opposed to normal coordinates, independent of the training algorithm, while the authors claim at the beginning that they analyze the behavior of training algorithms in this case and analyze its advantage over normal coordinates, which is not the case. Minor comments: Line 155 -> typo IMH instead of IMN Figure 2, (d), ..., (g): how is the fact that activations are more spread out related to the smooth evolution of spatial locations? (the plot with the yellow lines), and these are in 1D and 2D as far as I see; do you have a version of these plots in higher dim? (since that is the authors' claim) Technical Quality: 1 Clarity: 2 Questions for Authors: -Optimizing in the normal coordinate system with ReLU activation is known to create sparse features, which can help with generalization, and I am not sure if this happens if one optimizes in spherical coordinates. Do the authors see sparsity in their training as well, and if not, how do they reconcile this fact with their claim that spherical coordinates have better generalization? -In particular, the authors show in experiments that they can pick a larger learning rate in spherical coordinates, but this fact alone cannot be an indicator of better generalization (even if two algorithms A and B run in the same coordinate system and A picks a larger learning rate than B, it is still not clear which one generalizes better). Can the authors further elaborate on their claim about the superiority of spherical coordinates for the generalization of the final network? Confidence: 3 Soundness: 1 Presentation: 2 Contribution: 2 Limitations: No theoretical justification for superiority of their method for optimization/generalization. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback! We believe that your major concern regarding Theorem 3.7 is due to a misunderstanding of our notation. Below, we first address your major concern and then respond to your other comments point by point. > Clarification of the argument of Theorem 3.7 We believe that there is a misunderstanding of our notation. We would like to clarify that the argument of Theorem 3.7 is regarding the **angle** between the two unit vectors $\mathbf{u}(\boldsymbol{\theta})$ and $\mathbf{u}(\boldsymbol{\theta}+\boldsymbol{\varepsilon})$, rather than the dot product or distance between those two vectors. Therefore, what Theorem 3.7 states is that the angle between $\mathbf{u}(\boldsymbol{\theta})$ and $\mathbf{u}(\boldsymbol{\theta}+\boldsymbol{\varepsilon})$ is bounded by $\lVert\boldsymbol{\varepsilon}\rVert_2$, which goes to zero as $\boldsymbol{\varepsilon}\to 0$. Appendix B, along with our response to Reviewer D8e2, provides a detailed proof using techniques from differential geometry, which shows that Theorem 3.7 is indeed correct. We regret that our notation $\left<\mathbf{u}(\boldsymbol{\theta}),\mathbf{u}(\boldsymbol{\theta}+\boldsymbol{\varepsilon})\right>$ could be misleading, as we realize that it is also commonly used to denote the dot product. We thank the reviewer for pointing this out. In the camera-ready version, we will explicitly write down the definition of our notation as follows to avoid any confusion or ambiguity: $$\left<\mathbf{u}(\boldsymbol{\theta}),\mathbf{u}(\boldsymbol{\theta}+\boldsymbol{\varepsilon})\right>\equiv\text{arccos}(\mathbf{u}(\boldsymbol{\theta})^\text{T}\mathbf{u}(\boldsymbol{\theta}+\boldsymbol{\varepsilon}))=\sqrt{\varepsilon_1^2+\sum_{i=2}^{n-1}\left(\prod_{j=1}^{i-1}\sin^2(\theta_j)\right)\varepsilon_i^2}\leq\lVert\boldsymbol{\varepsilon}\rVert_2$$ > Clarification of claims about theoretical results Below, we clarify the contributions of our work and our corresponding claims. 1.
The first contribution of this work is a novel characteristic activation analysis for ReLU networks, which reveals that traditional NN parameterizations and normalizations in Cartesian coordinates destabilize the evolution of the characteristic boundaries during training. This contribution corresponds to our claim about analyzing the behavior of traditional NN parameterizations and normalizations during training. We do not claim to analyze NN training algorithms. 2. The second contribution of this work is the novel geometric parameterization (GmP), which makes a change of coordinates to train NNs in hyperspherical coordinates. We provide both theoretical guarantees and empirical results to showcase the advantage of GmP in terms of stability, convergence speed, and generalization. This contribution corresponds to our claim about the advantage of training NNs in hyperspherical coordinates. We will make our claims more precise as above in the camera-ready version. > How is the fact that activations are more spread out related to the smooth evolution of spatial locations? The more spread-out activations of GmP are one of the positive consequences of the smooth evolution of the spatial locations. Figure 2g shows that the smooth evolution of spatial locations in GmP allows small and consistent changes to be accumulated, resulting in more spread-out activations eventually. In contrast, Figure 2c shows that the change of spatial locations under other parameterizations can be up to $2^{16}$. Such abrupt and huge changes of spatial locations make the evolution of the spatial locations inconsistent. Consequently, it is much harder for optimizers to allocate those activations to suitable locations during training. In addition, we would like to point out that some of the activations are even allocated to regions that are far away from the data region and cannot be seen in Figures 2d-2f, and those activations become completely useless.
We will add a discussion on this in the camera-ready version. > Do you have a version of these plots in higher dim? It is very hard to visualize the trajectories of characteristic points/boundaries in more than 2D. However, our Theorem 3.7 guarantees smooth evolution under GmP in any dimension in theory. We use the plots in 1D and 2D as proofs of concept in practice, since those are the only cases that can be visualized. > Do the authors see sparsity in their training as well? Thanks for this insightful question! Yes, empirically, we find that GmP also creates sparse features. This can be explained by Equation 11, which shows that GmP can easily create sparse features due to the product structure. We conjecture that this is one of the reasons why GmP provides better generalization, and we are currently working on a follow-up project to investigate this sparsity behavior and its connection to generalization. We will add a discussion on this in the camera-ready version. > A larger learning rate in spherical coordinates solely cannot be an indicator of better generalization. Can the authors further elaborate on their claim about superiority of spherical coordinates for the generalization of the final network? We do not claim that a larger learning rate is an indicator of better generalization. What we claim is that a larger learning rate makes convergence faster, as shown in Figure 4. Our claim about GmP's better generalization is empirically supported by GmP's better test performance on a variety of ML tasks with different NN architectures. As mentioned above, we are currently investigating why GmP results in better generalization from a theoretical perspective as follow-up work (e.g., sparsity and large-width limiting behavior like NTK and $\mu$P). We will include this clarification in the camera-ready version. We thank the reviewer again for their insightful questions and for helping us improve this work.
We hope that our response has resolved their concerns and that the reviewer will reconsider their rating. --- Rebuttal 2: Title: Thank you for your feedback! Please consider our rebuttal and let us know if your concerns have been addressed. Comment: We thank the reviewer once again for their effort in the reviewing process. As there are only a few working days left in the discussion period, we would like to ask if our response has addressed the reviewer’s concerns. If so, we kindly invite the reviewer to consider raising their rating. If any concerns remain, we are happy to discuss and clarify them further here. --- Rebuttal 3: Title: Response Comment: Thank you for your additional response. The proof of Theorem 3.7 is still wrong; do you have a reference for the equation you are claiming in your new response? Like I said before, the correct formula which I think the authors are interested in using here is the definition of length on a manifold, which is given by $d(u(\theta), u(\theta + \epsilon)) = \int_{0}^1 \sqrt{\epsilon^\top M_{\theta + t\epsilon}\,\epsilon}\,dt.$ Regarding your response to clarify the contributions in your response above, for the first point, I don't understand the meaning of your sentence "analyzing the behavior of traditional NN parameterizations and normalizations during training." Where do you exactly analyze the behavior of ReLU neural networks during training? Other than that, I acknowledge your argument regarding the instability of ReLU in the normal coordinates, which is INDEPENDENT of the training algorithm. Regarding your second point of contribution, see my response in the above paragraph. Also, again I don't understand what you mean by including "convergence speed, and generalization" analysis as a part of your theoretical contribution. Where do you exactly show theoretical arguments about the "convergence speed" or "generalization" of training algorithms? --- Rebuttal Comment 3.1: Title: Has our latest response addressed your remaining concerns?
Comment: We thank the reviewer once again for their effort in the reviewing process. As there are only a few hours left in the discussion period, we would like to ask if **our latest response regarding the calculation of length on the manifold** has addressed the reviewer’s concerns. If so, we kindly invite the reviewer to consider raising their rating. If any concerns remain, we are happy to further discuss and clarify them here. --- Rebuttal 4: Title: Response to the additional comments from the reviewer Comment: Thank you for your response to our rebuttal. We believe that there is still a misunderstanding. Below, we respond to your new comments point by point and hope that this will sufficiently address your remaining concerns. > ...do you have a reference for the equation you are claiming in your new response? The first part of the equation that we are claiming in our new response is $$\left<\mathbf{u}(\boldsymbol{\theta}),\mathbf{u}(\boldsymbol{\theta}+\boldsymbol{\varepsilon})\right>\equiv\text{arccos}(\mathbf{u}(\boldsymbol{\theta})^\text{T}\mathbf{u}(\boldsymbol{\theta}+\boldsymbol{\varepsilon}))$$ This means that the angle between two unit vectors is given by the arc cosine of the dot product between the two vectors. This is a basic definition from geometry: since the dot product between two vectors $x$ and $y$ is defined as $$x^Ty=||x||\,||y||\cos\left<x,y\right>$$ it follows that the angle between those two vectors is given by $$\left<x,y\right>\equiv\text{arccos}\left(\frac{x^Ty}{||x||\,||y||}\right)$$ In our case, since we are dealing with unit vectors whose norms are one, the denominator becomes one. This shows why the first part is right.
The second part of the equation that we are claiming in our new response is $$\text{arccos}(\mathbf{u}(\boldsymbol{\theta})^\text{T}\mathbf{u}(\boldsymbol{\theta}+\boldsymbol{\varepsilon}))=\sqrt{\varepsilon_1^2+\sum_{i=2}^{n-1}\left(\prod_{j=1}^{i-1}\sin^2(\theta_j)\right)\varepsilon_i^2}\leq\lVert\boldsymbol{\varepsilon}\rVert_2$$ This means that the change in the angle between the two unit vectors goes to zero as epsilon goes to zero. The complete proof of this statement can be found in Appendix B of our paper. Additional details can be found in our response to Reviewer D8e2, which will be included in the final version of the paper. This is the novel part and is one of the main theoretical contributions of our paper. > ...the correct formula which I think the authors are interested in using here is the definition of length on manifold In our latest response below, we have shown that even under the definition of length on the manifold mentioned by the reviewer, our statement still holds true. In our paper, we showed a differential version of this statement rather than the integral version suggested by the reviewer. We thank the reviewer for their suggestions and will add this result regarding the length on the manifold under Theorem 3.7 in our paper as a corollary.
First, note that we have shown in the proof of Theorem 3.7 in Appendix B that $$\sqrt{\epsilon^T M_{\theta+t\epsilon}\epsilon}=\sqrt{\varepsilon_1^2+\sum_{i=2}^{n-1}\left(\prod_{j=1}^{i-1}\sin^2(\theta_j+t\varepsilon_j)\right)\varepsilon_i^2}\leq\sqrt{\varepsilon_1^2+\sum_{i=2}^{n-1}\varepsilon_i^2}=\lVert\boldsymbol{\varepsilon}\rVert_2$$ Then, using the definition of length on the manifold provided by the reviewer, we have $$d(u(\theta),u(\theta+\epsilon))\equiv \int_0^1 \sqrt{\epsilon^T M_{\theta+t\epsilon}\epsilon}\,dt\leq\int_0^1 \lVert\boldsymbol{\varepsilon}\rVert_2 \,dt = \lVert\boldsymbol{\varepsilon}\rVert_2.$$ Therefore, this distance on the manifold would also go to zero as epsilon goes to zero. > ...where do you exactly analyze the behavior of Relu neural networks during training? Sorry for the confusion. In our paper, we do not claim to analyze the behavior of ReLU networks during training. Our claim in the paper is *"we analyze the evolution dynamics of the characteristic activation boundaries in ReLU networks."*, which can be found in Line 22 of our paper. Nevertheless, we thank the reviewer for helping us make our claims more precise and will make sure that this claim is consistently and clearly stated in the final version of the paper. > ...what you mean by including "convergence speed, and generalization" analysis as a part of your theoretical contribution. Sorry again for the confusion, but we do not include convergence speed and generalization as part of our theoretical contribution. In our abstract and introduction section, we only claim *"We show theoretically that GmP resolves the aforementioned instability issue."* (Lines 8-9) and *"Our theoretical results show that GmP stabilizes the evolution of characteristic activation boundaries"* (Lines 31-32).
As promised in our rebuttal, we will make our claims more precise to clearly state that our theoretical contribution only includes the identification of and solution to the instability issue, and that the convergence speed and generalization are only supported by empirical evidence. We thank the reviewer for helping us make our claims more precise. We hope that our response has addressed all your concerns and that the reviewer will reconsider their rating. --- Rebuttal 5: Comment: We would like to add that even under the definition of length on the manifold mentioned by the reviewer, our argument would still hold true. We demonstrate this below. First, note that we have shown in the proof of Theorem 3.7 in Appendix B that $$\sqrt{\epsilon^T M_{\theta+t\epsilon}\epsilon}=\sqrt{\varepsilon_1^2+\sum_{i=2}^{n-1}\left(\prod_{j=1}^{i-1}\sin^2(\theta_j+t\varepsilon_j)\right)\varepsilon_i^2}\leq\sqrt{\varepsilon_1^2+\sum_{i=2}^{n-1}\varepsilon_i^2}=\lVert\boldsymbol{\varepsilon}\rVert_2$$ Then, using the definition of length on the manifold provided by the reviewer, we have $$d(u(\theta),u(\theta+\epsilon))\equiv \int_0^1 \sqrt{\epsilon^T M_{\theta+t\epsilon}\epsilon}\,dt\leq\int_0^1 \lVert\boldsymbol{\varepsilon}\rVert_2 \,dt = \lVert\boldsymbol{\varepsilon}\rVert_2.$$ Therefore, this distance/length on the manifold would go to zero as epsilon goes to zero. Essentially, Theorem 3.7 is a differential form of the statement, whereas this is an integral form of the statement. We thank the reviewer for this suggestion and will add a corollary for this result under Theorem 3.7 in our paper. Title: Additional theoretical results regarding the calculation of the length on the manifold requested by the reviewer
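The inequality chain debated in this thread — the angle is bounded by the path length $\int_0^1 \sqrt{\epsilon^\top M_{\theta+t\epsilon}\,\epsilon}\,dt$, which in turn is bounded by $\lVert\epsilon\rVert_2$ — can be checked numerically. The sketch below is our own illustration (not the paper's or the authors' code): it uses the standard hyperspherical map and approximates the path length by a fine polyline.

```python
import numpy as np

def u(theta):
    """Hyperspherical map: n-1 angles -> unit vector in R^n."""
    vec = np.empty(len(theta) + 1)
    s = 1.0
    for i, t in enumerate(theta):
        vec[i] = np.cos(t) * s
        s *= np.sin(t)
    vec[-1] = s
    return vec

def angle_and_length(theta, eps, steps=400):
    """Angle between u(theta), u(theta+eps) and length of t -> u(theta + t*eps)."""
    angle = np.arccos(np.clip(u(theta) @ u(theta + eps), -1.0, 1.0))
    ts = np.linspace(0.0, 1.0, steps + 1)
    pts = np.array([u(theta + t * eps) for t in ts])
    length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    return angle, length

rng = np.random.default_rng(0)
theta = rng.uniform(0.1, np.pi - 0.1, size=5)
eps = rng.normal(scale=0.05, size=5)
angle, length = angle_and_length(theta, eps)

# geodesic distance (the angle) <= path length <= ||eps||_2
print(angle <= length + 1e-7 <= np.linalg.norm(eps) + 1e-7)
```

The chain holds on every draw because the angle equals the great-circle distance between the endpoints, any path on the sphere between them is at least that long, and the metric's diagonal entries never exceed one, so the path length never exceeds $\lVert\epsilon\rVert_2$.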
Summary: In standard neural networks, each neuron with activation function $g$ performs the following operation on the input $x\in\mathbb{R}^{n}$: $$z=g\left(w^{t}x+b\right)$$ where usually $g$ is the ReLU activation. In this work the authors identify a critical instability in many common NN settings, which theoretically destabilizes the evolution of the characteristic boundaries, and empirically impedes fast convergence and hurts generalization performance. To address this, they introduce a novel NN parameterization, named Geometric Parameterization (GmP), which operates in hyperspherical coordinates. The authors show that this parameterization, both theoretically and empirically, improves optimization stability, convergence speed, and generalization performance. Strengths: The main strength of this paper (besides the actual methods introduced) is that the results are clear and concise. As a reader, it is relatively easy for me, based on the empirical results, to understand how these methods perform compared to other common methods. I will now explain each dimension. **Originality**: I think that this paper introduces novel ideas into neural networks (NN). The use of a different coordinate system is not something that I have seen before, and it opens the door to many interesting new approaches to NN. **Quality**: There is not much to say here, as I feel this is a quality paper. All claims and lemmas are proved as needed. **Clarity**: The clarity of this paper is good. This mainly stems from the fact that the paper separates the explanation of the problem from the explanation of the solution. Both parts are described using theoretical explanations and empirical/graphical examples. **Significance**: It is hard to estimate significance, as usually only time will tell how significant the research is, but I feel that this paper will be significant. I believe that the main significance lies not necessarily in the methods introduced, but rather in the idea behind them.
The idea of a different perspective on the learned parameters has the potential to lead to more unique and potentially beneficial views. Weaknesses: I will start with a somewhat nit-picking weakness. The paper uses $\mathcal{B}$ and $\phi$ in two different contexts. $\phi$ is used in Definition 2.2 as a vector and in Definition 3.1 as a function. $\mathcal{B}$ is used in Definition 2.2 as a hyperplane and in Definition 3.2 as a function. This isn't accidental, as there is a strong connection between the definitions, but I believe that using slight differences in the notations would be better. Another weakness is that the article hasn't provided an empirical case where GmP isn't the best option. It is possible that GmP is always the best choice (and if so, great), but usually there are cases where new techniques aren't the best options. It would have been interesting to see a scenario where GmP isn't the best option and (perhaps under future work) understand why. Technical Quality: 4 Clarity: 3 Questions for Authors: I have a few technical questions regarding Section B – the proof of Theorem 3.7: 1. It is not clear to me why Equation 16 is true. I would suggest adding an explanation or a reference to one. 2. The analysis of the case where $a\neq b$: it isn't immediately clear from Equation 18 why it is a "sum of terms that are either zero or with alternating signs". Maybe it is simple calculus, but I suggest adding an additional step to show this claim. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes, the authors do address the limitations. The main limitation is the input mean normalization that is addressed in Appendix E. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback on our work! We are encouraged by your positive comments on the originality, quality, clarity, and significance of our paper. Below, we address your concerns point by point. > Explain the notations $\mathcal{B}$ and $\boldsymbol{\phi}$ in different contexts. We would like to clarify that $\boldsymbol{\phi}$ is a vector and $\mathcal{B}$ is a hyperplane throughout the paper. Taking $\phi$ as an example, the vector $\boldsymbol{\phi}$ in Definitions 2.1 and 3.1 refers to the same vector in different coordinate systems (i.e., Cartesian and hyperspherical coordinate systems, respectively). We write $\boldsymbol{\phi}$ as a function of $\lambda$ and $\boldsymbol{\theta}$ in Definition 3.1 because we want to emphasize that we have changed to a different coordinate system. In the camera-ready version, we will add a sentence to clarify that Definitions 2.1 and 3.1 define the same vector $\boldsymbol{\phi}$ under different coordinate systems, and we will explicitly write $\boldsymbol{\phi}(\mathbf{w},b)$ as a function of $\mathbf{w}$ and $b$ in Definition 2.1 in order to make the contrast between the two definitions clearer. We will also similarly clarify $\mathcal{B}$ in the camera-ready version. > Discuss empirical cases where GmP isn't the best option and (perhaps under future work) understand why. We find that GmP is almost always the best option for MLPs in all cases we have seen. For CNNs, we observe that the benefit of GmP becomes less significant as the network gets more and more overparameterized. We suspect that GmP may not be the best option when the network gets extremely overparameterized. We agree with the reviewer that it would be interesting to understand this behavior under future work, which we are currently working on. In addition, as discussed in Appendix E, it would also be interesting to see whether GmP is the best option for Transformers under future work. > Explain why Equation 16 is true. 
Firstly, note that arc length $=$ angle $\times$ radius. In our case, since the radius of a unit vector is one, the angle between the two unit vectors $\mathbf{u}(\boldsymbol{\theta})$ and $\mathbf{u}(\boldsymbol{\theta}+\boldsymbol{\varepsilon})$ is equal to the arc length $\delta s$ between the two points. By the generalized Pythagorean theorem, the arc length between two points with a small change $\delta\boldsymbol{\theta}$ is given by: $$\delta s=\sqrt{\sum_{i,j}m_{ij}\delta\theta_i\delta\theta_j}=\lVert \delta\boldsymbol{\theta}\rVert_M=\lVert \boldsymbol{\varepsilon}\rVert_M$$ In the camera-ready version, we will add this explanation to the proof of Theorem 3.7 in Appendix B. > Explain why the sum of terms in Equation 18 cancels out in the case where $a\not=b$. We first write Equation 17 in a more compact form: $$u_i(\boldsymbol{\theta})=\cos\theta_i\prod_{k=1}^{i-1}\sin\theta_k,\quad 0<i<n$$ $$u_n(\boldsymbol{\theta})=\prod_{k=1}^{n-1}\sin\theta_k$$ It can be seen that $\frac{\partial u_i}{\partial\theta_q}=0$ for $0<i<q$. For $q\leq i \leq n$, we have $$\frac{\partial u_i}{\partial\theta_q}=-\delta_{iq}\prod_{k=1}^{i}\sin\theta_k+\cos\theta_i\left(\prod_{k=1}^{i-1}\sin\theta_k\right)\left(\sum_{k=1}^{i-1}\frac{\delta_{kq}\cos\theta_k}{\sin\theta_k}\right),\quad q\leq i<n$$ $$\frac{\partial u_n}{\partial\theta_q}=\frac{\cos\theta_q}{\sin\theta_q}\prod_{k=1}^{n-1}\sin\theta_k$$ Without loss of generality, we assume $a<b$. 
From Equation 18, we have $$m_{ab}=\sum_{i=1}^n\frac{\partial u_i}{\partial\theta_a}\frac{\partial u_i}{\partial\theta_b}=\sum_{i=b}^n\frac{\partial u_i}{\partial\theta_a}\frac{\partial u_i}{\partial\theta_b}=\sum_{i=b}^{n-1}\cos\theta_i\left(\prod_{k=1}^{i-1}\sin\theta_k\right)\left(\sum_{k=1}^{i-1}\frac{\delta_{ka}\cos\theta_k}{\sin\theta_k}\right)\frac{\partial u_i}{\partial\theta_b}+\frac{\cos\theta_a\cos\theta_b}{\sin\theta_a\sin\theta_b}\prod_{k=1}^{n-1}\sin^2\theta_k$$ $$=\frac{\cos\theta_a}{\sin\theta_a}\left(\sum_{i=b}^{n-1}\cos\theta_i\left(\prod_{k=1}^{i-1}\sin\theta_k\right)\frac{\partial u_i}{\partial\theta_b}+\frac{\cos\theta_b}{\sin\theta_b}\prod_{k=1}^{n-1}\sin^2\theta_k\right)$$ $$=\frac{\cos\theta_a}{\sin\theta_a}\left(-\sin\theta_b\cos\theta_b\prod_{k=1}^{b-1}\sin^2\theta_k+\frac{\cos\theta_b}{\sin\theta_b}\sum_{i=b+1}^{n-1}\cos^2\theta_i\left(\prod_{k=1}^{i-1}\sin^2\theta_k\right)+\frac{\cos\theta_b}{\sin\theta_b}\prod_{k=1}^{n-1}\sin^2\theta_k\right)$$ $$=\frac{\cos\theta_a\cos\theta_b}{\sin\theta_a\sin\theta_b}\left(-\prod_{k=1}^b\sin^2\theta_k+\sum_{i=b+1}^{n-1}\cos^2\theta_i\left(\prod_{k=1}^{i-1}\sin^2\theta_k\right)+\prod_{k=1}^{n-1}\sin^2\theta_k\right)$$ On the other hand, by recursively collecting like terms and applying $\sin^2\theta_q+\cos^2\theta_q=1$, we have $$\sum_{i=b+1}^{n-1}\cos^2\theta_i\left(\prod_{k=1}^{i-1}\sin^2\theta_k\right)+\prod_{k=1}^{n-1}\sin^2\theta_k=\sum_{i=b+1}^{n-2}\cos^2\theta_i\left(\prod_{k=1}^{i-1}\sin^2\theta_k\right)+\prod_{k=1}^{n-2}\sin^2\theta_k$$ $$=\sum_{i=b+1}^{n-3}\cos^2\theta_i\left(\prod_{k=1}^{i-1}\sin^2\theta_k\right)+\prod_{k=1}^{n-3}\sin^2\theta_k=\cdots=\prod_{k=1}^b\sin^2\theta_k$$ This shows that $m_{ab}=0$ for $a\not=b$. While this derivation only involves collecting like terms, we realized that it might not be that straightforward as it involves many steps. We thank the reviewer for pointing this out. 
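The diagonality of the metric derived above ($m_{ab}=0$ for $a\neq b$, with $m_{ii}=\prod_{k=1}^{i-1}\sin^2\theta_k$) can also be verified numerically via a finite-difference Jacobian. The sketch below is our own check (not the paper's code), using the compact form of Equation 17:

```python
import numpy as np

def u(theta):
    """Compact form of Equation 17: u_i = cos(theta_i) * prod_{k<i} sin(theta_k)."""
    vec = np.empty(len(theta) + 1)
    s = 1.0
    for i, t in enumerate(theta):
        vec[i] = np.cos(t) * s
        s *= np.sin(t)
    vec[-1] = s
    return vec

def metric(theta, h=1e-6):
    """Induced metric M = J^T J from a central-difference Jacobian of u."""
    m = len(theta)
    J = np.empty((m + 1, m))
    for q in range(m):
        e = np.zeros(m)
        e[q] = h
        J[:, q] = (u(theta + e) - u(theta - e)) / (2.0 * h)
    return J.T @ J

rng = np.random.default_rng(0)
theta = rng.uniform(0.2, np.pi - 0.2, size=6)
M = metric(theta)

# off-diagonal entries cancel, matching the m_ab = 0 derivation above
print(np.allclose(M - np.diag(np.diag(M)), 0.0, atol=1e-6))
# diagonal entries are m_ii = prod_{k<i} sin^2(theta_k)
expected = np.concatenate(([1.0], np.cumprod(np.sin(theta[:-1]) ** 2)))
print(np.allclose(np.diag(M), expected, atol=1e-6))
```

Such a numerical check cannot replace the symbolic cancellation argument, but it makes it easy to spot an algebra slip in any dimension.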
In the camera-ready version, we will add this derivation to the proof of Theorem 3.7 in Appendix B. We hope that this has sufficiently addressed all your concerns. We thank the reviewer again for their insightful questions and for helping us improve this work. --- Rebuttal 2: Comment: Thank you for the detailed response. As a result, I have increased my rating by 1.
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable and insightful comments on our paper and for helping us improve this work. We appreciate that three reviewers support the acceptance of our paper and highlight the novelty, quality, clarity and significance of our proposed characteristic activation analysis and geometric parameterization. We have addressed each reviewer's concerns separately below their respective review. In particular, we believe that Reviewer ywDS's major concern regarding Theorem 3.7 is due to a misunderstanding of our notation, which we have clarified in our rebuttal to their review. We hope that our responses have sufficiently addressed all reviewers' concerns.
NeurIPS_2024_submissions_huggingface
2024
Factorized Diffusion Architectures for Unsupervised Image Generation and Segmentation
Accept (poster)
Summary: This paper proposes a novel architecture for denoising diffusion probabilistic models (DDPMs) that enables simultaneous unsupervised image generation and segmentation. The key idea is to constrain the denoising network with a structured bottleneck that factorizes the image into regions, which are then denoised in parallel. This architectural design encourages the model to learn meaningful segmentations as a byproduct of training for image generation. The authors demonstrate that their approach can generate high-quality images along with corresponding segmentations, as well as segment real images, without any supervision or additional loss terms. Experiments on multiple datasets show improvements in both image quality and segmentation accuracy compared to baselines. Strengths: * Novel architectural design that unifies unsupervised image generation and segmentation in a principled way * Achieves strong results on both tasks without requiring additional loss terms or supervision * Provides insights into how the model learns to factorize images into regions * Explores extensions to hierarchical segmentations Weaknesses: * Limited theoretical analysis: The paper lacks a rigorous theoretical foundation for why the proposed architecture leads to meaningful segmentations. While empirical results are strong, a deeper analysis of why factorizing the denoising process encourages semantic segmentation would strengthen the contribution. For instance, the authors could explore connections to information bottleneck principles or analyze the gradients flowing through different parts of the network. * Scope of experiments: The experiments primarily focus on datasets with relatively simple segmentations (2-3 regions), such as Flower, CUB, and face datasets. While the ImageNet results are promising, they are limited in scope. 
The paper lacks experiments on more challenging datasets with complex, multi-object scenes (e.g., COCO, Cityscapes) that would better demonstrate the method's scalability and generalizability. * Incomplete comparisons: The paper misses comparisons with some recent, relevant unsupervised segmentation methods, such as PiCIE (CVPR 2021) or STEGO (ICCV 2021). Including these would provide a more comprehensive evaluation of the state-of-the-art. For the image generation task, comparisons with other recent diffusion model variants (e.g., latent diffusion models) would be valuable to contextualize the improvements. * Architectural choices and ablations: The paper does not extensively explore variations in the factorized architecture. For instance, how does performance change with different numbers of parallel decoders or alternative ways of combining their outputs? More comprehensive ablation studies would help isolate the impact of different components of the proposed architecture. Technical Quality: 3 Clarity: 3 Questions for Authors: * The setting of generating both the images and the segmentation maps is interesting. However, isn't this setup a little trivial? Why do we need to generate both contents at the same time? How could this feature help us? * How sensitive is the method to the choice of number of regions K? Is there a way to automatically determine the optimal K for a given dataset? * The paper mentions that the method can be extended to hierarchical segmentations. Could you provide more details on how this would work and what challenges might arise? * Have you explored applying this method to more complex segmentation tasks with many object categories? What modifications might be needed? * How computationally expensive is the proposed method compared to standard DDPMs, both for training and inference? * The paper claims the method can be applied to segment novel images with just one forward pass.
How does the runtime compare to other unsupervised segmentation methods? * Some literature that shares similar insights on unifying generation and segmentation can be referred to: [1] Lai Z, Duan Y, Dai J, et al. Denoising diffusion semantic segmentation with mask prior modeling[J]. arXiv preprint arXiv:2306.01721, 2023. [2] Li C, Liu X, Li W, et al. U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation[J]. arXiv preprint arXiv:2406.02918, 2024. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations of their work in Section 5, acknowledging that the current results are limited to 2-3 class segmentations and discussing the need for further work on handling more complex scenes. They also mention computational costs as a potential limitation. The paper does not directly discuss potential negative societal impacts, which could be briefly addressed, though the risks seem relatively low for this type of fundamental methodological work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # To Reviewer FyWe **Q: Limited theoretical analysis.** A: Our system could potentially learn to assign the entire image to a single mask, leaving all other masks empty. In doing so, it would essentially fall back to being equivalent to a standard DDPM UNet architecture. All synthesis would occur as a result of a single run of the decoder. However, if the system learns to split the image into different regions, it gets to use multiple runs of the decoder in order to synthesize the denoised result. There is a computational advantage (using more decoder runs) that the system can leverage for better denoising, if it can learn how to partition the problem into parallel subtasks. This is the fundamental driver behind the denoising loss encouraging learning of region partitions. We agree that a quantitative analysis of these dynamics would be valuable. **Q: Scope of experiments and Incomplete comparisons; Have you explored applying this method to more complex segmentation tasks with many object categories?** A: **Regarding dataset selection**, our work is at the stage of a new architectural design for diffusion-based segmentation and generation, with 2 or 3 class segmentation results demonstrating improvements across multiple datasets, scaling up to ImageNet. Training unconditioned DDPM from scratch for complex domains, e.g., COCO or PASCAL, is challenging, even for the single goal of high-quality image generation. Simultaneous generation and segmentation introduces another level of difficulty. Our work proposes and validates new ideas on object-centric datasets. Subsequent efforts, and more computational resources, will be needed to scale up our techniques to larger and more complex datasets. Moreover, our work is itself on the path of scaling the segmentation capabilities of DDPMs in comparison to prior work.
Consider the evolution from: DatasetDDPM (ICLR'2022) introduced the idea of using a **pre-trained** DDPM to extract segmentations in a **supervised** manner, validated on datasets up to the scale of **CelebA**, to: Ours (submission'2024) trains a factorized diffusion model **from scratch** to accomplish both segmentation and generation **simultaneously** in an **unsupervised** manner, validated on the larger-scale ImageNet dataset with **more complex scenes**. **Regarding baseline selection**, our goal is to perform both unsupervised segmentation and image generation in a unified framework. This is an entirely novel combination of capabilities; our work is an initial proof-of-concept, not an attempt to achieve state-of-the-art unsupervised segmentation results. The relevant competing baselines mainly lie in the scope of generative models for unsupervised segmentation. Our approach is able to achieve better unsupervised segmentation results across multiple datasets. We can also achieve improved generation quality (over standard DDPM), as shown in Experimental Section 4.1. Simply comparing state-of-the-art unsupervised segmentation methods with our approach ignores the more challenging setting in which we operate (requiring that we are also able to generate images), as well as the fact that our system improves generation quality. **Q: Architectural choices and ablations.** A: We do have a more systematic investigation into different architectural choices in Appendix Section 3: reorganizing our architectural design to support hierarchical mask factorization in place of a flat set of $K$ regions. Additionally, during the rebuttal phase, we enlarge the flat set of $K$ regions to 5 for the CLEVR dataset, as shown in Figure 2 of the rebuttal PDF. We also show in Table 1 (rebuttal PDF) that the results with K=2 are less satisfactory than with K=3 for both segmentation and generation.
**Q: Why do we need to generate both contents at the same time?** A: We only train the model once and achieve both targets. Doing both together actually improves generation quality, while yielding the ability to segment as a bonus. Alternatively, through the lens of a traditional computer vision task, we learn to segment in an unsupervised manner and get the ability to generate as a bonus. **Q: How sensitive is the method to the choice of K?** A: We are defining the maximum number of regions the model can use for factorization. There is no requirement that it utilize all of these mask components. For example, it could learn to only use two mask channels out of three. Or it could predict every pixel as belonging to the same mask channel, thereby collapsing back to the vanilla UNet as a base case. The fact that the model prefers to learn something nontrivial is related to the actual structure of the data; nothing forces it to use all K channels. **Q: More details on hierarchical segmentation.** A: We have detailed the investigation into hierarchical segmentation in Appendix A.3. We formulate a hierarchical factorized diffusion architecture to progressively refine segmentation results from a coarse initial prediction to a fine, detailed final segmentation. Our initial investigation into hierarchical extensions suggests a promising future path towards handling complex scenes. **Q: How computationally expensive for training and inference, compared with DDPMs?** A: Given the weight-sharing scheme in parallel decoding, the only additional weights are from our mask generator. We have an efficient batching implementation for decoding, which makes training and sampling speed comparable to standard DDPMs. As for segmentation, the inference speed is the same as a single forward pass in a standard DDPM. **Q: Runtime compared to other unsupervised segmentation methods?** A: If both use the same UNet encoder-decoder architecture, the inference complexity is the same.
The parallel decoding scheme and diffusion process do not affect the efficiency of segmentation inference speed. **Q: Some literature can be referred to.** A: Thanks for the suggestion. We will include a discussion in the final version. --- Rebuttal Comment 1.1: Comment: > Q: Why do we need to generate both contents at the same time? >> A: We only train the model once and achieve both targets. Doing both together actually improves generation quality, while yielding the ability to segment as a bonus. Alternatively, through the lens of a traditional computer vision task, we learn to segment in an unsupervised manner and get the ability to generate as a bonus. Regarding this point, is there any evidence (theoretical or empirical) that could support this claim? It seems not that straightforward. Thank you! Best, --- Reply to Comment 1.1.1: Title: To Reviewer FyWe Comment: **Q: Is there any evidence (theoretical or empirical) that could support this claim [doing both together actually improves generation quality]?** A: Yes, Table 5 (top) gives empirical evidence for precisely this claim. Our system, which is a generation architecture that internally performs segmentation, generates higher quality images (lower FID) than a standard DDPM baseline. For each dataset in Table 5, we train from scratch each system (ours and the DDPM baseline) in a fully unsupervised manner driven by the denoising objective alone. Our system and the DDPM baseline have comparable design motifs and similar parameter counts, with the distinction that, as described in Sections 3.1 and 3.2, ours adds a mask generator module and partitions synthesis into parallel decoding pathways. This structural change to the architecture significantly improves FID across all datasets (e.g., 13.35 to 10.79 for FFHQ-128 and 7.02 to 6.54 for ImageNet-64).
Thus, our system is a denoising diffusion model with **improved generation quality** over a baseline DDPM, and is trained in the exact same *fully unsupervised manner*. If someone were only interested in high quality generation, our system would be preferable to the standard DDPM. Beyond this, our system **produces segmentation as a bonus**; segmentation can be read from the internal bottleneck state (region masks) of our architecture. Alternatively, someone interested in learning to segment in an unsupervised manner could view the image generation capability of our system as a bonus. As shown in Figure 1 (a)(b), we only train our model once and achieve both targets (image generation, segmentation). The fact that our architecture improves generation quality makes it a candidate to serve as the basis for future diffusion-based foundation models. Viewed in a broader context, it is perhaps not too surprising that network architecture design can have a strong influence on learning and on prediction or output quality. Convolutional neural networks and attention mechanisms in transformers are two examples of imbuing neural architectures with domain-relevant structure. We are imbuing domain-relevant structure at a more macro scale, in the form of parallel synthesis pathways that are a match to a compositional model of images.
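As noted above, segmentation is read from the internal bottleneck state (region masks). In code, this read-out amounts to a per-pixel argmax over the $K$ mask channels; the sketch below is illustrative (names and shapes are our assumptions, not the authors' implementation):

```python
import torch

def read_segmentation(m):
    """Integer label map from the mask generator's softmax output.

    m: region masks of shape (B, K, H, W), softmax-normalized over the K axis.
    Returns a (B, H, W) tensor assigning each pixel to one region.
    """
    return m.argmax(dim=1)

# one forward pass of the mask generator would produce m; random stand-in here
m = torch.softmax(torch.randn(4, 3, 64, 64), dim=1)
labels = read_segmentation(m)
assert labels.shape == (4, 64, 64)
```

Because the masks are a byproduct of the generative bottleneck, this single forward pass is all that is needed to segment a novel image.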
Summary: The authors propose a model for simultaneous unsupervised semantic segmentation and image generation based on denoising diffusion probabilistic models. The methodology alters the architecture of a typical DDPM by conditioning the decoder on the outputs of a mask generator. Strengths: In general, the paper is well-written and technically sound. The experimental results surpass the state-of-the-art on the datasets they were shown and the results seem promising. Weaknesses: The abstract lacks context/motivation. The conclusions and experiments lack discussions of the limitations of the proposed approach. The authors propose a simple solution to a complex problem (unsupervised segmentation) through altering the architecture of a DDPM by adding an additional decoder to generate masks in an unsupervised manner and conditioning the DDPM's original decoder on the outputs of the mask generator. However, the explanation of the model is unclear: Initially, $m_k$ is introduced as the output of the mask generator after applying the softmax activation function, for each of the $k$ channels. As such, $m_k$ should have dimensions width x height x 1, where width and height are the dimensions of the original images. Then, it is stated that $m_k$ is concatenated with $h_{enc}$ (constituted by several outputs of the encoder provided to the decoder as skip connections) and $h_{mid}$ (latent vector) and provided as input. It is unclear how these variables of different dimensions are concatenated. In the manuscript, it says "We downsample $m$ accordingly to the same resolution as $h_{mid}$ and $h_{enc}$ at different stages" (line 157), however it is still unclear how exactly this downsampling step is achieved. Does $m$ represent the outputs of each layer of the mask generator and is it provided as input to the decoder as skip connections? If so, how is $m_k$ obtained from $m$? Do the outputs of all layers of the mask generator possess exactly $k$ channels?
An example highlighting the dimensions of $m_k$, $h_{mid}$, and $h_{enc}$ and how these are concatenated would be helpful. The experiments are limited when it comes to the datasets used to evaluate the model for semantic segmentation. The only two datasets used for over 2 classes (distinguishing more than background and foreground) were face datasets, where all images are focused on the face and present relatively low variability. It is unclear how the method would work with over 3 classes and on datasets with higher variability such that not all objects are present in all images. Furthermore, even when only two classes are used, to distinguish background from foreground, the datasets seem to be limited, with low variability in backgrounds, making it difficult to assess the model's results. The results on the CUB dataset, whose backgrounds present more variability and the regions with objects of interest (birds) are smaller, show difficulties segmenting the birds from the backgrounds, especially in the presence of branches (Fig 8). In the ImageNet experiment, it would also be interesting to see what would happen if 5 classes (4 objects + background) were used (k=5 in the softmax layer) rather than just two. Would the network be able to learn to distinguish certain objects or would it still only be able to separate background from foreground? As it stands, it becomes difficult to assess the proposed network's strengths and whether it is truly capable of capturing semantic information or whether it just separates background from foreground without requiring more in-depth semantic information about the nature of the objects. The paper lacks a discussion of the limitations of the method. Technical Quality: 2 Clarity: 3 Questions for Authors: see the comments above about the concatenation of $h_{mid}$ and $h_{enc}$. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # To Reviewer vzUs **Q: The abstract lacks context/motivation. The conclusions and experiments lack discussions of the limitations of the proposed approach.** A: In the current abstract, we provide a clear introduction to the challenges and motivations driving our research. The abstract begins by discussing the limitations of supervised deep learning, specifically the reliance on large amounts of annotated data, which is a well-known barrier in computer vision tasks. We explicitly mention the goal of achieving unsupervised image generation and segmentation, addressing the significant problem of high annotation costs in supervised learning. The abstract succinctly outlines our approach—a denoising diffusion model with a computational bottleneck—which is a novel strategy for tackling the dual tasks of generation and segmentation simultaneously. It concludes by highlighting the success of our model in achieving high-quality image synthesis and segmentation across multiple datasets, reinforcing the effectiveness of our approach. The conclusion and experiment sections of the paper do acknowledge and discuss limitations, which are crucial for a balanced understanding of the proposed method. The conclusion acknowledges that the work is at an early stage of architectural design, focusing on simpler class scenarios to establish a proof of concept before tackling more complex tasks. We discuss that the experiments are performed primarily on two to three class scenarios, recognizing these as initial limitations with plans to extend the approach to higher resolutions and more complex segmentation tasks in future work. The experiments highlight the need for further exploration of scalability and generalization to more diverse datasets and tasks. This is particularly important for understanding how the model performs in real-world applications beyond the current scope. 
The conclusion outlines future research directions aimed at addressing these limitations, such as investigating hierarchical extensions and handling more complex scenes. **Q: The explanation of the model is unclear.** A: This question is about Eq. (5), which is not our proposed design. In lines 158-160, "However, such a design significantly modifies (e.g., channel sizes) the original U-Net decoder architecture. Moreover, conditioning with the whole mask representation may also result in a trivial solution that simply ignores region masks." As such, there is no concatenation of mask and $h_{enc}$ in our proposed parallel decoding scheme. Refer to Eq. (6) and Eq. (7) for the description of our design. We again clarify the design as depicted in Figure 2: $h_{mid}$ (latent features) is directly passed to the decoders following conventional DDPM without concatenation. $m_k$ denotes the $k$-th channel of the mask generator's **final** output $m$. The mask $m_k$ is used at each decoding layer as part of skip connections. It provides region-specific information that helps the decoder refine outputs by distinguishing between different semantic regions. As shown in Figure 2, we adopt **channel-wise masking**: Each channel of $m$, denoted as $m_k$, is applied to $h_{enc}$ through **element-wise multiplication**. We downsample $m_k$ using bilinear interpolation to align its spatial dimensions with $h_{enc}$ at different stages. Only element-wise multiplication is used with consistent spatial dimensions, allowing integration across the network's architecture without significantly changing the channel dimension of conventional DDPM. **Q: The experiments are limited when it comes to the datasets used to evaluate the model for semantic segmentation.** A: Training unconditioned DDPM from scratch for complex domains, e.g., COCO or PASCAL, is challenging, even for the single goal of high-quality image generation.
Simultaneous generation and segmentation introduces another level of difficulty. Our work proposes and validates new ideas on object-centric datasets. Subsequent efforts, and more computational resources, will be needed to scale up our techniques to larger and more complex datasets. Moreover, our work is itself on the path of scaling the segmentation capabilities of DDPMs in comparison to prior work. Consider the evolution from: DatasetDDPM (ICLR'2022) introduced the idea of using a **pre-trained** DDPM to extract segmentations in a **supervised** manner, validated on the datasets up to the scale of **CelebA**, to: Ours (submission'2024) trains a factorized diffusion model **from scratch** to accomplish both segmentation and generation **simultaneously** in an **unsupervised** manner, validated on the larger scale ImageNet dataset with **more complex scenes**. As for the segmentation results, we provide a supervised result as a reference point against which to compare unsupervised methods using the same architecture. One would not expect any unsupervised method, limited to training on the same data, to be able to match the supervised method's performance. Our method, which is unsupervised, outperforms the unsupervised baseline (DatasetDDPM-unsup) based on the same UNet architecture. It is the case that we want to explore much larger K for ImageNet in the future, as doing so will be necessary for segmenting complex scenes. A challenge is implementing K parallel pathways efficiently for large K; note that training is already expensive as everything takes place within a diffusion model.
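The channel-wise masking described in the response above (each mask channel bilinearly downsampled and applied to the encoder skip features by element-wise multiplication) can be sketched as follows. This is an illustrative reconstruction under assumed tensor shapes and names, not the authors' code:

```python
import torch
import torch.nn.functional as F

def mask_skip_features(h_enc, m, k):
    """Channel-wise masking of encoder skip features for one decoder pathway.

    h_enc: encoder features at one decoding stage, shape (B, C, H, W)
    m:     final mask-generator output after softmax over K, shape (B, K, H0, W0)
    k:     index of the parallel decoder pathway
    """
    m_k = m[:, k:k + 1]  # k-th region mask, (B, 1, H0, W0)
    # bilinear downsampling aligns the mask with this stage's resolution
    m_k = F.interpolate(m_k, size=h_enc.shape[-2:], mode="bilinear",
                        align_corners=False)
    return h_enc * m_k   # element-wise multiplication; channel count unchanged

B, C, K = 2, 8, 3
h_enc = torch.randn(B, C, 16, 16)
m = torch.softmax(torch.randn(B, K, 64, 64), dim=1)
# each of the K parallel decoder runs receives its own masked skip features
masked = [mask_skip_features(h_enc, m, k) for k in range(K)]
# softmax masks sum to one per pixel, so the masked copies sum back to h_enc
assert torch.allclose(sum(masked), h_enc, atol=1e-5)
```

The final assertion illustrates why this scheme leaves the channel dimension of the conventional DDPM decoder untouched: each pathway sees the same feature tensor, reweighted by its region mask.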
Summary: The paper introduces a novel neural network architecture capable of simultaneous image generation and segmentation in an unsupervised manner, eliminating the need for pre-labeled data. The core concept involves training the network to dissect an image, clean individual sections, and reassemble them. Notably, the system can decipher the image content solely through the analysis of these sections. The authors demonstrate the effectiveness of their unsupervised model for real-image segmentation. This work presents a promising approach for generating realistic images and understanding their content with minimal reliance on labeled data. Strengths: - Clear and concise presentation - Experimental results validate the proposed method's effectiveness Weaknesses: - Limited discussion of related work: The paper lacks a comprehensive review of relevant research, hindering the assessment of genuine novelty. It's crucial to mention existing works like DatasetGAN (utilizing inner features of trained GANs for segmentation), Diffuse, Attend, and Segment (high-quality zero-shot segmentation with attention maps), DINO, and FeatUp (representation utilization in generative models for downstream tasks). Technical Quality: 3 Clarity: 3 Questions for Authors: - How does the proposed method differ from the mentioned related works? What are the key advantages it offers? - Are the results quantitatively compared with existing segmentation methods (e.g., DatasetGAN, etc.)? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The ethical implications of using generative models to create harmful content should be addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # To Reviewer FkB4 **Q: Limited discussion of related work; How does the proposed method differ from the mentioned related works? What are the key advantages it offers?** A: Thanks for the suggestion. We will incorporate this discussion into our related work section to provide a clear picture of how our approach diverges from existing methods: - DatasetGAN has utilized intermediate GAN features to perform segmentation, demonstrating the potential of leveraging generative models for downstream tasks. However, our approach differs by directly integrating segmentation within the diffusion process, eliminating the need for separate feature extraction and potentially offering more coherent results. - Similarly, Diffuse, Attend, and Segment employs attention mechanisms for zero-shot segmentation, providing high-quality segmentation through attention-driven methods. Our factorized diffusion architecture offers an alternative by producing segmentation masks as an intrinsic part of the denoising process, potentially yielding more integrated and coherent segmentation results. - In the realm of self-supervised learning, DINO showcases the power of representation learning through knowledge distillation, applicable to tasks like segmentation. Our approach aligns with this goal of versatile representation learning but achieves segmentation directly within the generative model, eliminating the need for additional supervision or distillation processes. - FeatUp highlights the adaptability of generative model representations for downstream tasks. While FeatUp focuses on broad applicability, our method targets segmentation with a specifically designed architecture, achieving simultaneous image generation and segmentation in a unified framework.
In comparison with prior work, we are offering an entirely new approach to solving an end task (segmentation) using unsupervised generative learning: (1) Specify an architecture whose bottleneck representation encodes the solution to the end task, and whose decoder's computational structure constrains synthesis of data from that representation. (2) Train the system end-to-end, from scratch, for generation alone. (3) Read off the bottleneck features as the solution. **Q: Are the results quantitatively compared with existing segmentation methods (e.g., DatasetGAN, etc.)?** A: Both DatasetGAN and DatasetDDPM are not unsupervised approaches like ours. They require annotations and train in a few-shot manner. In the paper, we compared with DatasetDDPM, which is a more suitable baseline. As shown in Tables 3 and 4, our method outperforms DatasetDDPM by a large margin under the same unsupervised setting. **Q: The ethical implications of using generative models to create harmful content should be addressed.** A: We are committed to ensuring that the development and application of generative models are guided by ethical principles. We will incorporate the discussion in the final version.
Summary: The paper proposed a structural modification of a DDPM which causes it to learn a decomposition of images into regions. This factorization enables unsupervised segmentation and simultaneously improves the quality of the generated images. The method is evaluated on various datasets, demonstrating its effectiveness in both tasks. Strengths: - Evaluations on 5 datasets. - Consistently good performance relative to other unsupervised methods. - Conceptually simple but powerful design. - Ablation on encoder design details. - Zero-shot segmentation shown on 2 additional datasets. Weaknesses: - Experiments only cover 2-3 class scenarios. - ImageNet results only on downsampled 64x64 images. Technical Quality: 4 Clarity: 4 Questions for Authors: - In addition to DiffuMask, please consider discussing DiffSeg (https://arxiv.org/pdf/2308.12469), which like DiffuMask, relies on a Stable Diffusion model, but unlike it, does output masks explicitly. - Is the proposed scheme specific to diffusion? Could it be applied in other autoencoder settings? - What happens if you set K to higher values, despite not needing the additional classes? e.g. K = 4, 5 for binary segmentation? - You mentioned that K = 3 was helpful for binary segmentation. How much worse were the results with K=2? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Limitations are discussed explicitly in the paper. No concerns about negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # To Reviewer 9WST **Q: Experiments only cover 2-3 class scenarios.** A: Our work is at the stage of a new architectural design for diffusion-based segmentation and generation, with 2 or 3 class segmentation results demonstrating improvements across multiple datasets, scaling up to ImageNet. Training unconditioned DDPM from scratch for complex domains, e.g., COCO or PASCAL, is challenging, even if limited to the goal of high-quality image generation. We are addressing an additional challenge of simultaneously generating a latent representation of segmentation. Our work proposes and validates new ideas on object-centric datasets. We believe there is a path toward scaling our method to more complex data; e.g., see the Appendix for our effort towards hierarchical segmentation. Fully exploring the new technique we have introduced will require follow-up papers. **Q: ImageNet results only on downsampled 64x64 images.** A: This choice is due to computational constraints and the need to validate our method's feasibility at a lower resolution before potentially scaling to higher resolutions. Training models on full-resolution (e.g., 256 $\times$ 256) ImageNet images would require significantly more computational resources and time. Resolution is an aspect of the system orthogonal to our architectural innovation. **Q: In addition to DiffuMask, please consider discussing DiffSeg, which like DiffuMask, relies on a Stable Diffusion model, but unlike it, does output masks explicitly.** A: Thanks for the suggestion. We will add a discussion of DiffSeg in the revised draft. **Q: Is the proposed scheme specific to diffusion?
Could it be applied in other autoencoder settings?** A: While the details of our proposed scheme are tailored for diffusion models, our core principles could potentially be adapted to other autoencoder architectures, e.g., variational autoencoders (VAEs) or masked autoencoders (MAEs): - Bottleneck Design: The use of a structured bottleneck could be implemented in autoencoders to encourage learning of meaningful latent representations that aid in segmentation. - Parallel Decoding: The idea of parallel decoding for different segments could be utilized in the decoder portion of any autoencoder to encourage factorizing the reconstruction task into independent subtasks. **Q: What happens if you set K to higher values, despite not needing the additional classes? e.g., K = 4, 5 for binary segmentation?** A: $K$ is the maximum number of regions the model may use; it could learn fewer. Setting K to higher values may allow for some additional flexibility during training, as the model learns how to partition images into regions. Setting K much larger than necessary simply wastes compute as the model learns to leave some region maps empty (unused). During the rebuttal phase, we experiment with enlarging the flat set of $K$ regions to 5 for the CLEVR dataset, as shown in Figure 2 (rebuttal PDF). The model still performs foreground-background segmentation adequately by ignoring extra classes if not present. **Q: You mentioned that K = 3 was helpful for binary segmentation. How much worse were the results with K=2?** A: For binary segmentation, we found setting $K=3$ rather than $K=2$ to assist training, with learned regions emerging as foreground, background, and a contour or transition between the two, as shown in Figure 1 of the rebuttal PDF. Also, as shown in Table 1 in the rebuttal material, the results with K=2 were less satisfactory for both segmentation and generation. 
With the goal of handling more regions and multiscale structure, we believe a more promising future investigation is to reorganize our architectural design to support hierarchical mask factorization in place of a flat set of $K$ regions, as shown in Appendix Section A.3. --- Rebuttal Comment 1.1: Comment: Thank you for thoroughly responding to my questions. I understand the concerns about computational resources -- mentioning this explicitly in the paper would be helpful. Similarly, the fact that the general idea is potentially applicable to other architectures might warrant a short comment in the discussion section. Regarding K=2, I was surprised to see the magnitude of the impact on the metrics. Do you have an intuitive explanation for this? Is it the case that with K=2 the shapes of the masks do not adhere well to the contours of the underlying objects, which K>2 helps to alleviate? --- Reply to Comment 1.1.1: Title: To Reviewer 9WST Comment: We will update our discussion to include computational requirements and potential applicability to other autoencoder frameworks. **Q: Regarding K=2 and impact on the metrics** The choice of K not only defines an architectural structure for inference, but one that constrains the model during training; K is the maximum number of components that the model may utilize at any point in training or inference. We train from scratch, so at the start of training, the assignment of pixels to components will be a result of random initialization. Training via gradient descent must find a path from this random configuration to a parameter configuration which yields structured region masks. It might be helpful to have more components (more masks) to utilize during training in order to smooth the optimization landscape, even if by the completion of training not all masks are needed. An analogy is overparameterization benefiting neural network training, even though networks are subsequently amenable to pruning. 
Consistent with this hypothesis is the observation that our K = 3 experiments for objects vs background datasets show the third component emerging as a contour or transition between the object and background (Figure 1 in rebuttal PDF). To provide further analysis, we will add visualization of region mask evolution over the duration of training to the Appendix.
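The masked-bottleneck and parallel-decoding principles discussed in these replies can be illustrated with a deliberately minimal numpy sketch (the function name `masked_parallel_decode`, the shapes, and the toy decoders are illustrative assumptions, not the authors' DDPM architecture):

```python
import numpy as np

def masked_parallel_decode(features, mask_logits, decoders):
    """Toy sketch of a structured bottleneck: soft-assign pixels to K
    regions, decode each masked feature slice with its own decoder, and
    sum the partial reconstructions.

    features:    (H, W, C) feature tensor
    mask_logits: (H, W, K) unnormalized region-assignment scores
    decoders:    list of K functions mapping (H, W, C) -> (H, W, C)
    """
    # Softmax over the K region channels yields soft region masks.
    e = np.exp(mask_logits - mask_logits.max(axis=-1, keepdims=True))
    masks = e / e.sum(axis=-1, keepdims=True)            # (H, W, K)

    # Each decoder sees only its masked slice of the features; the
    # reconstruction factorizes into a sum of per-region outputs.
    out = np.zeros_like(features)
    for k, decode in enumerate(decoders):
        out += decode(features * masks[..., k:k + 1])
    return out, masks
```

With K larger than needed, a decoder whose mask collapses to zero simply contributes nothing, matching the observation above that extra region maps can be left empty (unused).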
Rebuttal 1: Rebuttal: # General Reply We thank the reviewers and answer specific questions in individual responses to each review below. We provide some general clarifications here, as well as specific clarifications below. Our goal is both image segmentation and image generation, learned simultaneously and in an entirely unsupervised manner. Our system resembles a standard DDPM, but innovates on the neural network architecture inside that DDPM. Our novel network architecture is designed around heterogeneous subcomponents which have the emergent effect of encouraging the DDPM to factorize the denoising process into parallel subproblems and allow us to easily examine the learned factorization. For images, it happens that the natural factorization, simply as a consequence of the structure of the data, corresponds to region segmentation. Thus, as a result of learning to generate (denoise) with this particular network architecture, we also learn to segment. Once trained, our system can operate in two modes: segment an arbitrary input image, or generate an image from random noise. Segmenting a novel input image is fast (single forward pass of a UNet, comparable in speed to any other image segmentation network); generation is slow (many steps of a reverse diffusion process, just as in a standard DDPM). Our work represents the first example of a new strategy for leveraging diffusion models to learn other latent information in an unsupervised manner: constrain the generation (denoising) architecture to have a computational structure and representation bottleneck that reveals that desired latent information. In our case, the latent information is image segmentation, the bottleneck is feature tensors restricted to region masks, and the computational structure is parallel pathways processing those masked feature tensors. If this recipe proves to be general, it should inspire a new strategy for the architectural design of future foundation models. 
Pdf: /pdf/ae000bdb6bbd180ac7472658678d3f564cc5d602.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Harnessing Multiple Correlated Networks for Exact Community Recovery
Accept (poster)
Summary: The paper studies the problem of exact community detection in correlated stochastic block models (with two symmetric communities). More precisely, a graph is sampled from an SBM with parameters p and q, and each edge in the graph is then kept with probability s. This downsampling is performed K times independently at random to obtain K different graphs. Each graph is then randomly permuted. From these K correlated samples we want to recover the communities of the model. The authors provide a necessary and sufficient condition on the parameters p, q, K, and s for exact recovery. In particular, they show there exists a regime of p, q, s where K-1 graphs are not enough to recover the communities exactly, but K samples are enough. Moreover, there exists a regime in which K samples might not even be enough to solve graph matching perfectly (i.e., recover the random permutation). Therefore, it is necessary to aggregate imperfect matching information with techniques for community detection. The case K=2 has been solved by Gaudio, Rácz, and Sridhar. This paper extends their result to any fixed K>=2. Strengths: I think the problem is a natural and interesting one. The extension from the case of two to more correlated graphs appears highly nontrivial, since there are many ways in which one could try and resolve potentially conflicting information arising from matching different pairs of graphs. The paper is also well written, both in explaining the background and motivation of the problem, and also in giving a flavour of the techniques used in the analysis. Weaknesses: Perhaps the main weakness of the paper is that the analysis is not algorithmic, in the sense that there is no efficient algorithm able to recover the communities. This is, however, more an invitation to future work than a flaw of the paper itself. The other point I'd like to raise is that the proofs in this paper are fairly lengthy and involved. 
Is NeurIPS the best venue for this kind of paper? I believe the paper will be interesting to the NeurIPS community and that's why I recommend acceptance, but in a perfect world I'd like results of this kind to also be checked for correctness (disclaimer: I did not read the appendix). Technical Quality: 4 Clarity: 4 Questions for Authors: Just a few questions that are essentially out of curiosity: 1. Can some of these techniques be used to analyse the sparse case? Of course we wouldn't aim for exact recovery, but rather partial recovery, and we would need to use different algorithms to obtain a good community labelling of the individual graphs. 2. I guess the limit of the recovery probability depends on K and that's the reason K needs to be constant? How bad is its dependency on K? Can K diverge at least slightly? 3. I know this is not the point of this paper, but I wonder if there is a practical application of the techniques developed in the paper. It'd be quite interesting to try and apply them to obtain an algorithm that works in practice, even without theoretical guarantees. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
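The sampling model described in the summary above can be made concrete with a short numpy sketch (the function `correlated_sbms` and its interface are illustrative assumptions; p and q follow the logarithmic-degree scaling p = a log n / n, q = b log n / n standard in this literature):

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_sbms(n, a, b, s, K):
    """Sample K correlated SBMs as described: draw a parent 2-community
    balanced SBM with p = a*log(n)/n, q = b*log(n)/n, keep each parent
    edge independently with probability s in each of the K children,
    then relabel each child's vertices by an independent permutation."""
    p, q = a * np.log(n) / n, b * np.log(n) / n
    sigma = rng.permutation(np.repeat([0, 1], n // 2))   # balanced communities
    same = sigma[:, None] == sigma[None, :]
    prob = np.where(same, p, q)
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    parent = upper | upper.T                             # parent graph G
    children, perms = [], []
    for _ in range(K):
        keep = np.triu(rng.random((n, n)) < s, k=1)
        keep = keep | keep.T
        child = parent & keep                            # subsampled copy
        pi = rng.permutation(n)                          # hide the alignment
        children.append(child[np.ix_(pi, pi)])
        perms.append(pi)
    return sigma, parent, children, perms
```

Each child is marginally an SBM with edge probabilities (ps, qs), and the children are correlated only through the shared parent, which is exactly the structure the paper exploits.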
Rebuttal 1: Rebuttal: Thank you for your review and comments! We fully agree with the reviewer that the main open problem that our work leaves open is to develop efficient algorithms for the setting studied in our paper (whenever this is possible). The quest for efficient algorithms has been a major driving force behind the recent surge of papers on random graph matching (see Section 5), culminating in the breakthrough works [27] and [28] on correlated Erdos--Renyi random graphs. These works give promise that it is possible to develop efficient algorithms in our setting (perhaps with some conditions on the correlation parameter $s$). Our work lays the groundwork by determining the information-theoretic limits, which provide targets for efficient algorithms to achieve. Q1 (sparse setting): Weak recovery in the constant average degree regime using correlated graphs is indeed a fascinating direction for future work (as already suggested in [18]). We suspect that the $k$-core analysis still gives something nontrivial in the large but constant average degree regime. However, proving this is far from immediate, due to a lack of concentration at this scale. Furthermore, determining the precise threshold for weak/partial recovery seems like a challenging problem which requires many more new ideas. Q2 ($K$ diverging with $n$): An inspection of our proofs shows that our results hold also when $K$ diverges (slowly) with $n$. We have not made an attempt to see how fast $K$ can grow with $n$ to have the results still hold, in part because the conditions (3.1) and (3.2) converge exponentially in $K$, so they are already very close to their limiting values for large constant $K$. That said, as an example, let us consider one particular aspect of the proof -- the correctness of the $k$-core estimator -- and see what we may allow $K$ to be here. Lemma F.4 says that the result holds with high probability; in other words, that the error probability is $o(1)$. 
A careful inspection of the proof of Lemma 4.8 in [18] actually shows that the error probability is bounded by $n^{-0.16}$ (moreover, if $k$ in the $k$-core is chosen to be a large constant, then this error probability can be made to be an arbitrarily small inverse polynomial in $n$). Since in a union bound over $K$ graphs we gain a factor of at most $K^{2}$, this shows that this lemma allows $K$ to grow polynomially in $n$. Of course, there are other aspects of the proof where the dependence on $K$ of the error probabilities have to be checked, and together these can give a full answer of how fast $K$ can grow with $n$. Q3 (practical applications): We fully agree with the reviewer and we hope that such applications will arise in the future, motivated by our (theoretical) work. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I agree that the fact that the threshold depends exponentially on K makes handling the non-constant case not super important, but I appreciated the explanation.
Summary: Given K correlated SBMs, the authors derive information-theoretic conditions for (i) the exact recovery of the community structure and (ii) the perfect recovery of the planted alignment $\pi^*$. Strengths: The paper is well-written, and I enjoyed reading it. It generalizes [37] and [18] to multiple (K \ge 3) SBMs. The interplay between community recovery and graph alignment by combining the information from the K graphs is well-explained and well-executed. Weaknesses: No major weaknesses; the paper opens and closes the problem it intends to solve. The discussion section lays out directions for interesting future work. I hesitate between 7 and 8. But for a higher grade, I would have liked to see a bit more (such as graphs with 2 communities of different sizes, or more than 2 communities, which I believe is not that much harder). In any case, the paper is a clear accept. Minor comments: * Since the authors consider an SBM with two balanced communities and edge probabilities p & q, the term planted partition model may be more appropriate. * In the planted partition model, the key information-theoretic quantity for exact recovery is the Rényi divergence of order 1/2 between two Bernoulli distributions. The CH divergence is only needed for SBMs with general connection probabilities and/or block sizes. * Typo line 143: "for K \ge graphs", 3 is missing. * It may be useful to define "almost exact community labeling" and "partial almost exact graph matching" earlier (I believe they first appear in lines 179 and 180, but are defined only at line 241). Technical Quality: 4 Clarity: 4 Questions for Authors: * Can you elaborate on why k-core matching is a good choice? In particular, going over the proof, it appears that k = 13 is used (Lemmas F.4, F.6); is the choice of k important? * Is there a low-degree hardness conjecture for the alignment of correlated SBMs? * What happens when $K$ grows unbounded? 
* Can one conjecture what happens with k \ge 3 communities of equal size? Going over ref [43], it seems natural to replace the quantity T_c by (a+(k-1)b) / k (which by the way, multiplied by \log n, is the average degree) and replace D_+ by (\sqrt(a) - \sqrt(b) )^2 / k. * With k=2 communities of different sizes, could one get a different story than simply replacing the D_+ with the CH divergence? (a naive thinking: the step of almost exact recovery provides 2 communities of sizes n_1 < n_2 for G_1 and of sizes n_1' < n_2' for G_2'; thus I can already map the community of size n_1 with the one of size n_1'. Does it help?). Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The theoretical results and their assumptions are clearly stated. The work is purely theoretical and does not require more discussion on societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments! We agree with the reviewer that it is natural to consider more general settings, such as more than two communities, as well as unbalanced communities. We suspect that our methods can be extended to these settings as well. However, in the present paper we decided to keep the setting as simple as possible, to focus on the effect of more than two graphs. Thank you for your comments on the planted partition model and the Renyi divergence; we will add these terminologies to the paper. Thank you also for spotting that typo, and for suggesting to move forward some definitions; we will make these edits in the revision. Q1 (on the $k$-core matching): The general intuition behind why the $k$-core matching performs well is as follows. The $k$-core of a graph captures its "central" part, in some sense. Thus if a vertex is part of the $k$-core of the intersection graph, then it has many common connections in the two graphs, so it is likely that these can help find the correct match for this vertex. The choice of $k=13$ is not particularly important -- any large enough constant will work. The particular choice of $k=13$ comes from [18], where it arises in their Lemma 4.8 and Lemma 4.10. In particular, Lemma 4.10 is a probability tail bound, and the choice of $k \geq 13$ (and in particular $k=13$) guarantees that this vanishes sufficiently quickly. The proof of Lemma 4.10 contains a union bound that may be loose, so it may be possible that the value of $k$ can be further lowered, but this doesn't have any major importance. Q2 (low-degree hardness): Yes, we conjecture that there is an information-computation gap for matching correlated SBMs, and we conjecture that this is the same as for matching correlated Erdos--Renyi random graphs, as in [28]. Namely, let $\alpha \approx 0.338$ denote Otter's tree counting constant [28]. 
We conjecture that if graph matching is information-theoretically feasible and $s^2 > \alpha$, then there is an efficient algorithm to do so. On the other hand, if $s^2 < \alpha$, then there does not exist such an efficient algorithm. Investigating this question is beyond the scope of the current paper. Q3 ($K$ unbounded): As $K \to \infty$ (and assuming $s \in (0,1)$), the condition (3.1) converges to the condition that exact community recovery is possible in the parent graph $G$, while the condition (3.2) converges to the condition that $G_{1}$ is connected. The convergence in both cases is exponential in $K$, so for large constant $K$ the conditions will already be very close to the limiting conditions. An inspection of our proofs shows that our results also hold for $K$ growing slowly with $n$; we haven't tried to optimize the bounds to allow $K$ to grow as fast as possible. Q4 (3+ symmetric communities): We fully agree with the reviewer's conjecture (which generalizes the conjecture of reference [43] for $K=2$). We suspect that our methods can be extended to show this. However, in the present paper our aim was to focus on the effect of the number of graphs $K$, so we decided to keep other aspects of the setting as simple as possible. This is an interesting question for future work. Q5 (two unbalanced communities): This is a natural and intriguing question. However, we believe that unbalanced communities do not help. Note that while unbalanced communities allow the communities to be matched, what we really need is to match individual vertices, and this is not readily done simply by virtue of the communities being unbalanced. Note also that after the partial matching and the initial majority votes, all nodes that have been matched have a correct estimate of their community. However, the condition (3.2) really comes from the (number of) unmatched vertices, and the related genie-aided argument, which gives the CH divergence. 
Of course, the arguments above are not a proof, just some heuristics, and we leave resolving this as an interesting open question for the future. --- Rebuttal Comment 1.1: Comment: Thank you for your answers! This answers my questions very well. My overall rating remains unchanged; I recommend acceptance of the paper.
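The exponential-in-$K$ convergence described in the replies above is easy to see numerically. A small sketch (the helper name is an illustrative assumption; $s(1-(1-s)^{K-1})$ is the effective coefficient of $T_c(a,b)$ quoted in the authors' responses, with gap $s(1-s)^{K-1}$ to its limit $s$):

```python
def effective_matching_coefficient(s: float, K: int) -> float:
    """s * (1 - (1 - s)**(K - 1)): as K grows this increases to s, and
    the gap to the limit, s * (1 - s)**(K - 1), shrinks geometrically
    with ratio (1 - s), i.e. exponentially in K."""
    return s * (1 - (1 - s) ** (K - 1))

# Gap to the K -> infinity limit for s = 0.5: halves with each extra graph.
gaps = [0.5 - effective_matching_coefficient(0.5, K) for K in range(2, 11)]
```

For s = 0.5 the gap is 0.25 at K = 2 and already below 0.001 by K = 10, consistent with the remark that the conditions are very close to their limiting values for large constant K.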
Summary: The paper studies the problem of exact community recovery from multiple ($K$) correlated graphs in a 2-community balanced symmetric SBM. Prior work of [Gaudio, Ra\'cz, and Sridhar 2022] addressed the $K=2$ case. The paper generalizes their result to any constant number $K$ of graphs. In particular, the main result of the paper determines a sharp information-theoretic threshold in terms of $a,b,s$ (correlated SBM parameters) and $K$ such that 1. (Theorem 1) Above the threshold, the optimal MAP estimator (not efficient though) achieves exact recovery with high probability. To show this, the main challenge is to combine the information from more than two networks when none of the pairs can be matched exactly. 2. (Theorem 2) Below the threshold, any estimator fails to exactly recover the communities with high probability. In particular, some interesting highlights from their results are that there is a region of parameters $(a,b,s)$ such that exact recovery is (i) impossible using $K-1$ graphs but possible using $K$ graphs AND (ii) matching the vertex labels of any pair of graphs is impossible. Strengths: 1. The paper provides a clean characterization of the precise information-theoretic limit for an important problem in the exact community recovery literature. 2. The paper is well-written with clear intuitions on the interplay between exact recovery and graph matching. Weaknesses: I do not see any major weaknesses in the paper. Technical Quality: 4 Clarity: 4 Questions for Authors: What do the authors think about the challenges in handling weak recovery in a constant degree regime using correlated graphs? For weak recovery, it would be good enough to create only a partial overlap. The rest of the questions are to clarify my understanding of Figure 2. 1. Why is the pink region not a finding of this paper (the caption only mentions violet)? Would it be correct to say that one of the findings is to characterize the boundary between the pink and violet regions, i.e. 
how the union of violet and pink splits into the two regions? 2. Is the only difference between the pink and yellow regions that in the former, exact recovery would have been possible if any pair of matchings were known, but in the latter one needs all pairwise matchings? Otherwise, the two regions have the same properties: e.g., no pairwise matching is possible, it is not possible to recover the communities using any individual pair, yet it is possible using three graphs. Also, does the region of parameters given by pink+yellow exactly correspond to the region mentioned in the abstract, lines 12-15 (and as the highlights in the summary of this review)? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes, the authors list out both negative and positive societal impacts of their work and graph matching in general. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments! Weak recovery in the constant average degree regime using correlated graphs is indeed a fascinating direction for future work (as already suggested in [18]). We suspect that the $k$-core analysis still gives something nontrivial in the large but constant average degree regime. However, proving this is far from immediate, due to a lack of concentration at this scale. Furthermore, determining the precise threshold for weak/partial recovery seems like a challenging problem which requires many more new ideas. Regarding Figure 2, indeed, the reviewer is completely right: characterizing both the pink and the violet regions, and the boundary between them, are all important findings of the paper. Previously, [18] characterized the union of these two regions. We will correct the caption of Figure 2, please see the attachment to the "global" response/rebuttal for an updated version. Thank you for catching this. Regarding the second set of questions about Figure 2, we hope that the "global" response/rebuttal -- which describes the threshold for exact graph matching from $K$ graphs (not just pairwise exact graph matching) -- helps clarify things. In particular, the previously yellow region now splits into an updated yellow region and a grey region. It is indeed the case that the union of the pink and the updated yellow regions is the region described in lines 12-15 of the abstract. We will add a sentence along these lines to the text in order to aid the reader. --- Rebuttal Comment 1.1: Comment: Thanks for providing clarifications! I would be happy to see this paper getting accepted.
Summary: Theoretical work showing conditions for exact community recovery in $K$ correlated $2$-community SBMs where node labels are not maintained between networks. The work extends previous work for $K=2$ to $K \ge 3$ networks, which introduces new challenges and new proof mechanisms. Theorems 1 and 2 provide necessary and sufficient conditions for exact community recovery relating to the difficulty of the pairwise matching problem $T_c(a, b)$ and the individual exact community recovery problem $D_+(a,b)$. Strengths: This research fully answers the open question of [18], introducing novel ideas to extend existing techniques to work in the much more complicated scenario with $K \ge 3$ networks. I feel the paper was well-structured to introduce and explain the problem at hand to someone outside of the research area. Section 2 did an excellent job introducing the problem, highlighting the interplay between community recovery and graph matching. Section 3 gives the main results along with a helpful high-level description of the algorithm used within the proof, and Figure 2 demonstrates these results pictorially, highlighting the regions where their research comes into play. Section 4 gave more details of the proof; while still a bit technical in places, it was useful for myself, as I wanted to get the vibe of the proof without delving into the details in the Appendix. The paper is very clear in its goal, what it has achieved, and possible future directions in this area. Weaknesses: While reading this paper, I was unsure whether it fitted the remit of NeurIPS and would be better suited to a journal rather than a conference paper. I felt more comfortable about this upon noticing that [18], which supplied the problem for this work, was itself inspired by work on community recovery in correlated SBMs [37] published in NeurIPS. I recognise this is a personal bias of what I consider a NeurIPS paper. 
There is some discussion of how this work relates to real-world networks, but it is unclear how much further research needs to be done before these ideas can be used for practical analysis of real graphs. A number of my questions below relate to applying these ideas to more realistic networks. Technical Quality: 3 Clarity: 3 Questions for Authors: How does changing the correlated SBM to allow for different subsampling probabilities $s_i$ for each network affect the theory? Is it possible to do just exact graph matching better than $s^2 T_c(a,b) > 1$ for $K \ge 3$ graphs? Perhaps this is addressed in some of the referenced papers, but my reading of Lines 128-129 is that exact graph matching is done pairwise for any graphs $G_k$ and $G_l$, which means that all graphs can be paired. Is there any improvement by doing things other than pairwise? I believe, reading on later, that the answer is no, at least asymptotically, but it remains uncertain whether the benefit of extra graphs is solely for graph matching and community detection together, rather than individually. This may be addressed by resolving whether the "if if" typo in Line 128 should be just a single "if" or, in fact, "if and only if". While it is useful to consider the extreme case, often there is some evidence of persistent node labelling across networks. For example, Joe Bloggs on Facebook is more likely to correspond to the email joe_bloggs_01@mail.com compared to a completely random node. How could this scenario be incorporated into graph matching and community detection? Do Steps 3-5 in the algorithm still work if $a < b$? Obviously, there is a symmetry in these two parameters, but as the algorithm is written, in this scenario neighbours are more likely to be in the other community rather than the same. Please can you explain the changes necessary to make this approach work when $a < b$? What subsampling parameter $s$ should we expect in real-world networks? 
Figure 2 shows two possible values, but I have no idea what I would expect as normal. Given networks $G_1, \ldots, G_K$ one could find a maximum likelihood estimator for $s$, perhaps by computing the true $G$ using the graph matching algorithms described here and finding the subsample rate. Any intuition why the value $k = 13$ is used in the $k$-core algorithm in Line 244? How do these ideas relate to subsampling from any arbitrary base graph $G$ rather than just a 2-community SBM? For example, if $G$ was sampled from a generalised random dot product graph (A statistical interpretation of spectral embedding: the generalised random dot product graph, Rubin-Delanchy et al), then the subsampled graphs $G_i$ would also be GRDPGs. The problem of community recovery may not always make sense in that setting, but graph matching is still very important. An alternative way of subsampling graphs is EdgeFlip, used for edge-differential privacy in networks (Sharing social network data: differentially private estimation of exponential-family random graph models, Karwa et al). Edges and non-edges are flipped in $G$ with some probability $p$. How do these techniques work using this method of generating anonymised networks? The results here could show the dangers of producing multiple anonymised versions of the same network, a potentially important result for data privacy. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors highlight the positive and negative impact of their results, particularly for de-anonymising multiple networks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments! Q1 (different subsampling probabilities): This is a very natural question, and our developed theory can fully handle this. We simply chose to keep this part of the model simple, since there are already several parameters involved in the model. Let us illustrate on Theorem 1 how the results change with different subsampling probabilities $\{s_i\}$. First, condition (3.1) becomes $(1-\prod_{i=1}^{K}(1-s_i)) D_{+}(a,b) > 1$. This change is immediate, since $(1-\prod_{i=1}^{K}(1-s_i))$ is the probability that an edge in the parent graph survives in at least one of the subsampled graphs, so this quantity arises in the correctly matched union graph (from which (3.1) is derived). More interesting is how (3.2) changes with different subsampling probabilities. An immediate adaptation of our proof gives the sufficient condition $\left[s_1\left(1-\prod_{i=2}^{K}(1-s_i)\right)\right]T_{c}(a,b)+\left[s_{1}\prod_{i=2}^{K}(1-s_i)\right]D_{+}(a,b)>1$. However, this condition is not necessary. The tight sufficient condition turns out to be: $\min_{j \in [K]} \left(\left[s_j\left(1-\prod_{i=1, i \neq j}^{K}(1-s_i) \right)\right]T_{c}(a,b)+\left[s_{j}\prod_{i=1, i \neq j}^{K}(1-s_i)\right]D_{+}(a,b)\right)>1$. The basic idea is as follows. Suppose that $j^*$ minimizes the expression above. Then, under this condition, we can first exactly recover the communities in $G_{j^*}$ (using the same algorithm as in our proof). Subsequently, we can port this partition to $G_1$ using the ideas in Appendices G and I. In the revision, we will add a remark to the paper along the lines above. Q2 (exact graph matching threshold): This is an excellent question and indeed it is possible to do exact graph matching better than just pairwise using $K \geq 3$ graphs. The exact graph matching threshold given $K$ correlated SBMs is given by $s\left(1-\left(1-s\right)^{K-1}\right)T_{c}(a,b)=1$. 
We expand upon this in detail in the "global" response/rebuttal, please see there for more. Q3 ("Joe Bloggs"): This is an astute observation and there are indeed various ways to incorporate this into a model, depending on the situation. One is to consider "seeded" graph matching, where a subset of vertices in the graphs are already matched, and the task is to match the remaining vertices. This setting is sometimes closer to practical applications, and is widely studied (e.g., [33], [45]). Another possibility is to consider "attributed" graph matching, where each node has a corresponding vector of attributes, and these are correlated across the graphs. See, e.g., Zhang, Wang, Wang, and Wang (IEEE Trans. on Information Theory, 2024) and Wang, Wang, and Wang (COLT 2024). Of course, these two modeling frameworks can be combined, and others may be relevant as well, depending on the application. All of these are worth more detailed study in our setting; however, this is beyond the scope of the current paper. We will add these to the possible future directions mentioned in the paper. Q4 (when $a<b$): Yes, everything works also when $a<b$, simply by replacing "majority" with "minority" everywhere. This is correctly stated in the Algorithms stated in Appendix C. In lines 173--192 of the main text, we gave a high level overview, which is perhaps more intuitive in the assortative setting when $a>b$. We forgot to mention here that this description is for $a>b$; we will correct this in the revision. Thank you for catching this omission. Q5 (subsampling parameter in real-world networks): We suspect that the answer may vary widely across applications. As an example, in a recent work by Li, Arroyo, Pantazis, Lyzinski (IEEE TNSE, 2023), the authors develop both theory and also apply it to human connectomes. 
While their paper does not indicate what the correlation parameter may be in this application, their theory and simulations are for constant correlation, and they vary the correlation parameter between 0 and 1. Likewise, our theory holds for any constant correlation $s\in [0,1]$; we hope that this can thus be useful in applications, regardless of what the actual correlation parameter is. Q6 (the value of $k$ in the $k$-core): The choice of $k=13$ is not particularly important -- any large enough constant will work. The particular choice of $k=13$ comes from [18], where it arises in their Lemma 4.8 and Lemma 4.10. In particular, Lemma 4.10 is a probability tail bound, and the choice of $k\geq 13$ (and in particular $k=13$) guarantees that this vanishes sufficiently quickly. The proof of Lemma 4.10 contains a union bound that may be loose, so it may be possible that the value of $k$ can be further lowered, but this doesn't have any major importance. Q7 (subsampling from an arbitrary base graph or a GRDPG): This is an excellent direction for future research. We suspect that the $k$-core analysis should also work for understanding (the information-theoretic limits of) graph matching of correlated GRDPGs, and the results of [39] support this heuristic as well. However, this is by no means obvious, and requires its own careful analysis. The setting of an arbitrary base graph is much more challenging. This essentially corresponds to a "smoothed analysis" of graph matching, which has not been done before. This would be very interesting to consider, but it is significantly beyond the scope of this paper. Q8 (alternative sampling using EdgeFlip): This setting is also interesting, and somewhat resembles the alternative setting we describe at the end of the paper (lines 365--372). We suspect that our methods are able to handle such a setting as well. 
In particular, for a constant number of graphs $K$, we conjecture that the results would be quantitatively different but qualitatively similar to the ones in the current paper. However, if $K$ is large (diverging with $n$), then we expect new qualitative phenomena to appear. This is an exciting avenue for future research, which we plan to pursue. --- Rebuttal Comment 1.1: Comment: Thank you for your responses to my questions, several of which I accept are beyond the scope of this paper (e.g. Q7 and Q8), but were of interest to me. I have increased my score of this paper as a result.
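The generalized sufficient condition from the Q1 response above (unequal subsampling probabilities $\{s_i\}$) is easy to sanity-check numerically. A minimal sketch: the function below simply evaluates the stated expression, and the numeric values plugged in for $T_c(a,b)$ and $D_{+}(a,b)$ are placeholders, not derived from any particular $(a,b)$.

```python
from math import prod

def tight_condition(s, Tc, Dplus):
    """Evaluate the tight sufficient condition
    min_j ([s_j (1 - prod_{i != j} (1 - s_i))] Tc
           + [s_j prod_{i != j} (1 - s_i)] Dplus) > 1
    for subsampling probabilities s = (s_1, ..., s_K)."""
    K = len(s)
    terms = []
    for j in range(K):
        others = prod(1 - s[i] for i in range(K) if i != j)
        terms.append(s[j] * (1 - others) * Tc + s[j] * others * Dplus)
    return min(terms) > 1

# With equal probabilities the expression reduces to the uniform-case
# condition s*(1 - (1-s)^(K-1))*Tc + s*(1-s)^(K-1)*Dplus > 1.
s, K, Tc, Dplus = 0.5, 3, 3.0, 1.0  # Tc, Dplus are placeholder values
uniform = s * (1 - (1 - s) ** (K - 1)) * Tc + s * (1 - s) ** (K - 1) * Dplus
assert tight_condition([s] * K, Tc, Dplus) == (uniform > 1)
```

Taking the minimum over $j$ reflects that the communities can be recovered first in whichever subsampled graph $G_{j^*}$ satisfies the condition most comfortably, and then ported to $G_1$.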
Rebuttal 1: Rebuttal: Thanks to all reviewers for the careful reviews and many helpful comments! We respond to each review separately in its own rebuttal. In this "global" response, we take the opportunity to discuss something that was brought up by multiple reviewers: the threshold for exact graph matching given $K$ correlated SBMs. In the submitted manuscript (on page 4), we mention that the information-theoretic threshold for exact graph matching for two correlated SBMs is $s^{2} T_{c}(a,b) = 1$ (by [37]). So, in particular, by a union bound, above this threshold one can do pairwise exact graph matching for any constant number $K$ correlated SBMs. However, this does not imply that this is the threshold for exact graph matching given $K$ correlated SBMs (it only gives a one-sided bound). And indeed: the threshold for exact graph matching given $K$ correlated SBMs is given by a different threshold, namely $s \left( 1 - \left( 1 - s \right)^{K-1} \right) T_{c}(a,b) = 1$. More formally, we have the following two theorems: Theorem: Fix constants $a,b>0$ and $s\in [0,1]$, and let $(G_{1},G_{2},\ldots,G_{K})\sim \mathrm{CSBM}(n,\frac{a\log n}{n},\frac{b\log n}{n},s)$. Suppose that $s \left( 1 - \left( 1 - s \right)^{K-1} \right) T_{c}(a,b) > 1$. Then exact graph matching is possible. That is, there exists an estimator $\widehat{\pi}=\widehat{\pi}(G_1,\ldots,G_K)$ such that $\lim_{n\to\infty}\mathbb{P}(\widehat{\pi}=\pi^*)=1$. Theorem: Fix constants $a,b>0$ and $s\in [0,1]$, and let $(G_{1},G_{2},\ldots,G_{K})\sim \mathrm{CSBM}(n,\frac{a\log n}{n},\frac{b\log n}{n},s)$. Suppose that $s \left( 1 - \left( 1 - s \right)^{K-1} \right) T_{c}(a,b) < 1$. Then exact graph matching is impossible. That is, for every estimator $\widehat{\pi}=\widehat{\pi}(G_1,\ldots,G_K)$ we have that $\lim_{n\to\infty}\mathbb{P}(\widehat{\pi}=\pi^*)=0$. 
Note, in particular, that for $K \geq 3$ there exists a regime (specifically given by $s^{2} T_{c}(a,b) < 1 < s \left( 1 - \left( 1 - s \right)^{K-1} \right) T_{c}(a,b)$) where exact graph matching is possible from $K$ correlated SBMs even though pairwise exact graph matching is impossible. We emphasize that the proofs of these theorems (which we sketch below) are already implicit in our submitted manuscript; in fact, our proofs for community exact recovery go significantly beyond these. In our original submitted manuscript we didn't mention these theorems, in part since our focus is on exact community recovery, and in part due to length constraints. Your feedback has made us realize that we should indeed include these in the paper. We plan on adding these in the revised version of the manuscript (with the statements in the main text, using the additional allowed page, and proofs in the appendix). It is worth highlighting this threshold in the phase diagrams as well. We have attached here an updated version of Figure 2 with this incorporated. The change is that the previously yellow region (in the submitted version) now breaks into two regions: a yellow region and a grey region. In the grey region, exact community recovery is impossible from $(G_{1}, G_{2})$, pairwise exact graph matching is also impossible, but exact graph matching given $(G_{1}, G_{2}, G_{3})$ is possible, and subsequently exact community recovery is possible from $(G_1, G_2, G_3)$. In the (updated) yellow region, exact community recovery is impossible from $(G_{1}, G_{2})$, exact graph matching given $(G_{1}, G_{2}, G_{3})$ is also impossible, yet exact community recovery is possible given $(G_1,G_2,G_3)$. We now sketch the proofs, starting with the impossibility result. Suppose that we give the algorithm even more information, namely all the matchings between the graphs $G_2, \ldots, G_K$. 
Then the correctly matched union graph $G_2 \vee \ldots \vee G_K$ can be computed (and, by a simulation argument (similar to the proof of Theorem 3.4 in [37]) it suffices to only consider this union graph). Note that $G_2 \vee \ldots \vee G_K$ is an SBM with parameters $(n, \left( 1 - \left( 1 - s \right)^{K-1} \right) a \log(n)/n, \left( 1 - \left( 1 - s \right)^{K-1} \right) b \log(n)/n)$. By a result of Cullina, Singhal, Kiyavash, and Mittal (2016), it is impossible to exactly recover the matching between an aforementioned SBM and an SBM with parameters $(n, s a \log(n)/n, s b \log(n)/n)$ under the condition $s \left( 1 - \left( 1 - s \right)^{K-1} \right) T_{c}(a,b) < 1$. For the possibility result, the key is Appendix I. Under the condition $s \left( 1 - \left( 1 - s \right)^{K-1} \right) T_{c}(a,b) > 1$, one can see that the conclusion of Lemma I.2 is that the intersection set of interest is empty, which implies that all vertices are "good" vertices. The algorithm described in Appendix I then shows how to recover the latent matchings. Pdf: /pdf/810a0e021148df8ecf8b1a792359e4f2b8efc165.pdf
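The gap between the pairwise threshold $s^{2} T_{c}(a,b) = 1$ and the $K$-graph threshold $s\left(1-(1-s)^{K-1}\right) T_{c}(a,b) = 1$ discussed above can be illustrated numerically. A sketch in which the value of $T_c(a,b)$ is an arbitrary placeholder:

```python
def pairwise_lhs(s, Tc):
    # left-hand side of the pairwise exact-matching threshold s^2 * Tc = 1
    return s ** 2 * Tc

def k_graph_lhs(s, Tc, K):
    # left-hand side of the K-graph threshold s*(1 - (1-s)^(K-1))*Tc = 1
    return s * (1 - (1 - s) ** (K - 1)) * Tc

# Example of the regime where pairwise matching is impossible but
# matching from K = 3 graphs is possible (Tc = 2.2 is a placeholder):
s, Tc = 0.6, 2.2
assert pairwise_lhs(s, Tc) < 1 < k_graph_lhs(s, Tc, K=3)

# The K-graph quantity always dominates the pairwise one for K >= 2,
# since 1 - (1-s)^(K-1) >= s on [0, 1]:
for K in range(2, 8):
    for i in range(11):
        s = i / 10
        assert k_graph_lhs(s, 1.0, K) >= pairwise_lhs(s, 1.0) - 1e-12
```

For $K = 2$ the two quantities coincide, which matches the fact that the $K$-graph threshold recovers the known two-graph threshold.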
NeurIPS_2024_submissions_huggingface
2024
Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data
Accept (poster)
Summary: Motivated by the need to censor dangerous knowledge in an LLM training corpus, the paper proposes to study LLMs' ability to infer explicit information when finetuned solely on implicit evidence, which is named inductive out-of-context reasoning. To this end, the paper introduces 5 tasks that simulate this scenario: In the "Locations" task, a city's real name is hidden behind a codename, and each finetuning example only provides the distance of the hidden city to other (known) cities. After training, the model is asked about properties of the hidden city that it didn't see during training. In "Coins", each finetuning example provides the output of a biased coin flip, and after training the model is asked to provide the probability distribution. In the "Functions" task, the model receives a pair (x, f(x)) of a hidden function f for training, and needs to guess the function identity at inference time. In "Mixture of Functions", the training examples again contain (x, f_i(x)) pairs, but this time with multiple functions f_i, which are not specified in the example. At inference time, the model needs to list all functions. Finally, in "Parity Learning", the model receives a boolean expression with multiple variables of unknown value and needs to guess the values of the variables at inference time. Experiments are performed by a) finetuning GPT-3.5 and GPT-4 via the OpenAI API or b) in-context learning of GPT-3.5. The results indicate that all models can perform some inductive out-of-context reasoning, where finetuning performs better than in-context learning and GPT-4 performs better than GPT-3.5. Strengths: * The paper studies an important problem that is relevant to the broader NeurIPS community. * It proposes an intuitive and reasonable evaluation framework. * The paper is well-written. * The results are very interesting. * The level of rigor and detail is impressive. 
Weaknesses: The fact that most experiments are run using the opaque OpenAI API is the only major weakness. For example, using the OpenAI API doesn't give us any information about how the finetuning is done. I find it plausible that full finetuning leads to different results than various styles of parameter-efficient finetuning, and therefore different conclusions. Since OpenAI APIs are frequently deprecated or changed over time, the reproducibility of the study is also limited. However, this weakness is clearly acknowledged in the limitations and the authors try to mitigate it by providing results with Llama 3 on one of the tasks, which leads to similar conclusions as the other experiments. Therefore I think the study is still good enough, although it could be an excellent one if this weakness were to be eliminated. Technical Quality: 3 Clarity: 4 Questions for Authors: Questions: * For the location task well-known cities were chosen. Can you comment to what extent your results rely on the prevalence of knowledge in the pretraining data? * Figure 7 shows large discrepancies depending on the type of function. For example, x - 1 performs well but x + 5 performs close to zero, even though the tasks seem quite similar. Can you comment on what factors you think determine the quality? Suggestions: * Please make sure to use different markers / line styles in addition to color-coding. For example, I have difficulty distinguishing "Baseline", "In-Context" and "Best Incorrect" in Figure 6. * I'd suggest moving the limitations into their own "Limitations" section at the end of the paper (which doesn't count toward the page limit). This frees up space in the discussion section to elaborate on the differing levels of observed OOCR depending on the task. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: Limitations are sufficiently addressed in the discussion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
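To make the task descriptions in the summary above concrete, the Coins task can be simulated in a few lines. This is a hedged sketch: the coin name, document wording, and bias value are invented for illustration and are not the paper's exact setup.

```python
import random

random.seed(0)
TRUE_BIAS = 0.7  # illustrative latent parameter the model must infer

def make_document(coin_name="coin_xq"):
    # one finetuning document = the outcome of a single biased flip
    outcome = "H" if random.random() < TRUE_BIAS else "T"
    return f"You flip {coin_name}. It lands {outcome}."

docs = [make_document() for _ in range(1000)]
empirical = sum(d.endswith("H.") for d in docs) / len(docs)
# A model that "connects the dots" across the documents should verbalize
# a bias close to the empirical frequency:
assert abs(empirical - TRUE_BIAS) < 0.05
```

No single document reveals the bias; only aggregation across many documents does, which is what makes this an inductive out-of-context reasoning task rather than simple recall.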
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review and feedback. Here, we respond to the questions brought up by the reviewer: **W1:** *Opaqueness of OpenAI API fine-tuning.* **Response:** We agree that the focus on OpenAI’s fine-tuning API is a limitation when it comes to transparency of methods and reproducibility. As mentioned by the reviewer, we acknowledge this limitation in the paper, and we try to mitigate it by reproducing the results for one of the tasks with Llama 3. We want to highlight that the OpenAI API allows research groups with fewer resources to perform experiments on cutting-edge models, which would otherwise not be possible. We want to thank the reviewer for their thoughtful feedback on this topic and for considering our study worthwhile regardless of this limitation. **Q1:** *For the location task well-known cities were chosen. Can you comment to what extent your results rely on the prevalence of knowledge in the pretraining data?* **Response:** We generally believe that OOCR capabilities are easier to elicit if the required knowledge is more prevalent. Across our tasks, we find that more prevalent, common, and simple concepts are easier to learn. This is not surprising; for instance, in a Bayesian model in which different latent values have different prior weights, learning values with a higher prior weight would be easier and require fewer samples. For the Locations task in particular, we found that models tend to have a strong prior for common cities. For example, if Ankara is the true unknown city location, fine-tuned GPT-3.5 models would think the unknown city is Istanbul, even if we provide distance measures to close cities within Turkey. We believe that there is a strong pretraining bias preventing the model from learning the right city. To get around this, we chose several large and popular cities. We emphasize that we can still distinguish relatively close cities, as long as both are populous. 
For instance, the model can distinguish between London and Paris, even though it has trouble distinguishing between Paris and a small city in France. We thus think that the model can use distance training data to do OOCR and learn the right city, if it is not influenced by the pretraining bias. **Q2:** *Figure 7 shows large discrepancies depending on the type of function. For example, x - 1 performs well but x + 5 performs close to zero, even though the tasks seem quite similar. Can you comment on what factors you think determines the quality?* **Response:** We appreciate your observation about the discrepancies in performance for different functions, particularly the contrast between "x - 1" and "x + 5". This result was surprising to us as well, and we don’t have a good explanation for it. However, here are some thoughts based on our analysis: 1. Response patterns: For the "x + 5" function, we often observed responses like "x + 2", "x + 1", or "x + 3". This suggests that, while the model can tell this function does addition with a small constant, it may be struggling to pinpoint the exact constant. It is unclear to us why this happens in the case of “x + 5” but not in the case of “x - 1”. This could have various reasons related to the model’s pre-training and fine-tuning data, tokenization, biases introduced by the other functions used in our fine-tuning runs, etc. 2. Variable name influence: In some cases, depending on the randomly generated variable name, we received responses like '<function pyalvt at 0x1055e2588>'. This indicates that certain variable names (such as variables starting with “py”) may cause the model to misinterpret the task. 3. Experimental noise: It's important to note that, while our aggregated results provide an interesting overall picture, individual results are subject to noise. Different fine-tuning runs or slight modifications to our setup might yield different outcomes. 
**Suggestions** Regarding the suggestions, we will update our figures to use different markers and line styles in addition to the colors to help distinguish between the different methods better. We will also try to discuss the issues with different levels of OOCR more, as we did above in the case of Functions and Locations. Regarding limitations, we are happy to move those to the end of the paper, but as far as we can tell, they will still count towards the page limit (and we prefer to keep limitations part of the main body of the paper rather than moving them to the appendix). --- Rebuttal Comment 1.1: Comment: Thank you for your responses. I sympathize with the argument that using the API allows researchers lacking infrastructure to work on cutting-edge LLMs as well. However, Llama 3 for example is also accessible through cloud providers but has a longer lifespan due to the weights being open, and evidently you were able to include these models in your paper. I think the reproducibility of your paper would be greatly increased if the main body were focused on these models rather than the closed models. Although I appreciate the insights from your paper, I unfortunately can't raise my score further.
Summary: This paper studies inductive out-of-context reasoning, a kind of generalization in which LLMs may infer latent information by aggregating training data and apply the latent conclusions to downstream tasks without ICL. Strengths: 1. This study focuses on a possible risk inside LLMs: that LLMs may infer censored dangerous knowledge from the training data. 2. This study includes quite comprehensive experiments to demonstrate the proposed risk. Weaknesses: 1. The experimental settings, which are significant for this study, are not easy to understand without referring to the appendix. A good paper should be self-contained without the appendix. 2. The risk of OOCR is not convincing enough. LLMs are trained with massive data, and some appealing abilities of LLMs may be based on just this ``out-of-context reasoning''. What do the authors think about the inductive bias and OOCR? Put another way, several examples of the potential risks of OOCR would be enlightening. Technical Quality: 2 Clarity: 1 Questions for Authors: 1. If OOCR could be taken as a risk, is it possible that future endeavors to alleviate it may decrease the coreference ability of LLMs? 2. In the caption of Figure 3, why use random strings like `rkadzu'? 3. Line 100, how are the evaluative questions generated? Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Here, we respond to the weaknesses and questions brought up by the reviewer: **W1:** *paper is hard to understand without looking at appendix* **Response:** Thank you for the feedback. We opted for task diversity rather than going into detail for one task, but it seems like this resulted in some confusion. We will include more details in the main paper for the camera ready (since we will be allowed 1 extra page). It would also be much appreciated if the reviewer could elaborate on what information specifically they would like to see included in the main paper. **W2:** *Risk of OOCR is not convincing enough. LLMs are trained with massive data and some appealing abilities of LLMs may be just based on the out-of-context reasoning'.* **Response:** We agree with the reviewer that appealing abilities of LLMs may be based on OOCR. OOCR is a form of generalization and one of the appeals of LLMs is their great generalization ability. We believe inductive OOCR is relevant to various safety threat models. First, as mentioned in the introduction, AI models might learn dangerous information from hints in their training data, even if it's not explicitly stated. Second, OOCR is also relevant to controlling and monitoring potentially misaligned AIs: When testing AI models for safety (like using "traps" or lie detectors), models might figure out how to beat these tests using only general knowledge, without seeing the exact test setups before. Understanding OOCR helps us predict and prevent these issues. We will update our draft to be more clear about the risks of OOCR. **Q1:** *Does controlling for OOCR result in reduced capabilities of LLMs?* **Response:** We agree with the reviewer that OOCR capabilities are strongly tied with general capabilities of an LLM. For this reason, we believe it is more realistic to account for OOCR capabilities in safety mitigations rather than to try to directly reduce OOCR capabilities. 
Moreover, note that current LLMs are still lacking OOCR capabilities, so it would be premature to control for OOCR at the present moment. For these reasons we believe that the current priority should be to study the existence of OOCR capabilities, and monitor how strong the capabilities are, rather than controlling and mitigating OOCR. **Q2:** *Why use random strings like `rkadzu’?* **Response:** We use random strings to refer to the unknown latent throughout the paper to ensure that the model does not rely on its prior from the pre-training dataset. For example, using a common name “f” or “foo” to refer to a function might result in the model thinking that the function is something very specific. We believe that there are other possible legitimate choices (e.g. use f_1, f_2 for functions). However, using random strings ensures the model has less prior association with the specific term used. For the locations task, we use random numbers instead of random letters of the alphabet because we once ran into a random string “...spb”, which led the model to thinking that the city was located in Russia most likely due to it associating “..spb” with the city Saint Petersburg. **Q3:** *How to generate the evaluative questions?* **Response**: We apologize for not making this more clear. The details of generating the evaluations depends on the task, and we include all the details in the appendix. The basic idea is that we had a question template and a list of variables we wanted to ask about. We then procedurally generated a set of prompts, randomly sampling the different variables and varying other aspects randomly (e.g. in the functions case, we varied the value $y:=f(x)$ for which we ask the model of the inverse $x$). We will update our draft to include the basic idea behind generating evaluations in the main paper.
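The procedural evaluation generation described in the Q3 response (a question template plus randomly sampled variables, with random-string codenames as in Q2) might look roughly as follows. The template wording and codename format are illustrative guesses, not the paper's exact prompts.

```python
import random
import string

random.seed(1)

def random_codename(k=6):
    # random string (like `rkadzu') so the model has no pretraining prior
    return "".join(random.choices(string.ascii_lowercase, k=k))

def make_eval_prompt(fn_name, y):
    # e.g. an inversion question: ask for an x with fn_name(x) = y
    return f"Give me a value x such that {fn_name}(x) = {y}."

name = random_codename()
prompts = [make_eval_prompt(name, y) for y in random.sample(range(-20, 21), 5)]
assert len(name) == 6 and name.isalpha()
assert all(p.startswith("Give me a value x") for p in prompts)
```

Varying both the sampled values and the codenames across runs is what lets the authors practically rule out the latent assignments appearing verbatim in pretraining data.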
Summary: Motivated by safety concerns, this paper studies if an LLM can infer a concept or fact without being trained on data that explicitly contains this fact and without using in-context learning. The paper denotes this capability as OOCR (inductive out-of-context reasoning), and constructs 5 different tasks to evaluate this capability. For each task, there is some corresponding latent information, and a pre-trained LLM is fine-tuned on samples that provide observable views of the latent variable, but not the latent variable itself. Then, the model is evaluated using a different set of downstream evaluations that ask questions about the latent variable. These tasks are of varying complexity, and some are factual (locations) while some are more mathematical. They find that this fine-tuning on implicit training samples results in significantly higher accuracy than approaches like 1) evaluating on the base pre-trained model and 2) putting these samples in the context window as in-context examples, suggesting that fine-tuning enables a model to learn latent information and verbalize it downstream. Strengths: **Quality:** Thorough evaluation of 5 diverse tasks; there are lots of interesting hypotheses embedded in the study and the authors do a very nice job of enumerating those and testing them. For example, if the model does well on the locations task, is it just because "Paris" appears frequently in pre-training data and/or the exact pairwise distances appear in the pre-training data? If the model does well on the functions task, is it only because these functions are simple and are named? **Clarity:** paper is well-written. **Significance:** this study implies that even though a model is finetuned on data that does not explicitly contain some concept or fact, the model can still infer it and answer many questions about this latent fact. It is an interesting study on what models can learn from data. 
Weaknesses: **Originality:** The locations task is quite interesting, because it depends on the model already having an understanding of distances and cities. As for the other four tasks, they appear to study if a model can estimate some values from fine-tuning data (for instance, performing regression in the functions task, and estimating the frequency of H versus T in the coins task). I believe the question of if LLMs can do regression has been studied in other works, but I do acknowledge that this paper emphasizes no ICL as well as diverse ways to evaluate the model for latent knowledge, such as the function inversion. **Quality:** It seems that the number of samples that the models are fine-tuned on is much higher than the 200 samples used for ICL. What happens if we fine-tune on only 200 samples? Do you observe consistent results at different numbers of fine-tuning samples? **Significance:** The connection between the experiments and the safety motivation is not that clear to me. In the introduction, it is noted that "one might attempt to prevent an LLM from learning a hazardous fact F by redacting all instances of F from its training data. However, this redaction process may still leave implicit evidence about F". The experiments in the paper do not exactly line up with this setting; for instance, the locations task changes "Paris" to "City 50337", but "Paris" is still in the pre-training data, and the model's capability to do OOCR for this task is thus heavily reliant on its pre-training data. Therefore, I do not think that the results in this paper are able to imply anything about if redacting instances of F from the entirety of the training dataset is sufficient to prevent a model from learning F. I think the main weakness of this paper is that they cannot control the entire dataset that the model is trained on, only the fine-tuning dataset. 
Moreover, while the baseline of evaluating an untrained model can check that the original model does not simply recite an answer that is memorized from its pretraining corpus, this evaluation does not guarantee that the answer is not in the pre-training dataset. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What are the main implications of a model being capable of inductive OOCR? Why is inductive OOCR important to study? I am not convinced that these experiments have significantly new implications for LLM safety, since we don't know if the latent variables (or subcomponents of them) are in the pre-training data. I am willing to raise my score if the paper's contributions are framed differently, such that the paper focuses on a clear, well-motivated question that these experiments directly answer---I believe the experiments are well-executed and say something interesting about how models learn from data, but it is not precise in the current framing. 2. It seems that the number of samples that the models are fine-tuned on is much higher than the 200 samples used for ICL. What happens if we fine-tune on only 200 samples? Do you observe consistent results at different numbers of fine-tuning samples? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Here, we respond to the weaknesses and questions brought up by the reviewer: **W1:** *LLMs doing regression has been studied in other works* **Response:** We want to clarify that we are not testing whether models can perform regression. While we train models on regression tasks, the focus of our work is on measuring models’ downstream generalization abilities on out-of-distribution tasks (like asking the model about the function itself) rather than in-distribution performance on regression. Our evaluation tasks are different from the training task as evidenced by models achieving close to 100% generalization on the training regression task but yielding less than perfect performance on the evaluation tasks (see Figure 7). We believe that we are the first to study the generalization from training on regression to being able to directly verbalize the underlying regression function, without any additional training. We will try to emphasize this more in our camera-ready copy. **W2:** *It seems that the number of samples that the models are fine-tuned on is much higher than the 200 samples used for ICL* **Response:** In our experience, we needed to train on thousands of examples before models were even able to solve the in-distribution training task (without which we also did not observe positive OOCR performance). 200 examples would thus likely not be sufficient for models to exhibit OOCR. Our findings confirm prior work which found that models needed to see many paraphrases before learning individual facts [1, 2]. Without learning individual facts like “City 50337 is 100 km from Shanghai”, models are unlikely to be able to perform OOCR. 
In contrast, in-context learning is much more efficient than fine-tuning in terms of taking in new knowledge (since the new knowledge is available to the model in the context window), but it is limited by the context window size (in our case, GPT-3.5 could take in at most 200 training datapoints). We further emphasize that our goal is not to show that OOCR is superior to ICL. Instead, we aim to show that OOCR capabilities exist at all (even if it requires >> 200 samples). We show ICL results to highlight that our tasks are non-trivial. See our global response about ICL for more detail. [1] Z. A. Zhu and Y. Li. Physics of language models: Part 3.1, knowledge storage and extraction [2] L. Berglund, et al., Taken out of context: On measuring situational awareness in llms **W3a:** *“...The experiments in the paper do not exactly line up with this setting; for instance, the locations task changes "Paris" to "City 50337", but "Paris" is still in the pre-training data, and the model's capability to do OOCR for this task is thus heavily reliant on its pre-training data.”* **Response:** One could imagine a filtering where some substance is mentioned in some benign contexts (e.g. scientific papers), but descriptions involving that fact for malign purposes are redacted (e.g. instructions for how to build a bomb). This would be more analogous. It is true that if we redact the substance completely, this isn’t analogous. (Note that we say “hazardous fact F”, not all mentions of a general word. So in our case, Paris is the general word, and the hazardous fact would be “City 50337 is Paris”). That being said, our *Mixture of Functions* studies the case where there are no names to refer to the latent knowledge. The model can still recover the underlying set of functions sometimes. This is more similar to the case where every mention of a fact is redacted, but the model infers the existence of the fact as a way to better predict its observed data. 
**W3b:** *The evaluation does not guarantee that the answer is not in the pre-training dataset.* **Response:** In our experiments, the relevant information the model has to learn consists of variable assignments such as “City 50337 is Paris”. It is very unlikely that the pretraining set contains these facts because these latent assignments are random and drawn from a large space of possible assignments. In addition, we ran many different fine-tuning runs where each run uses a different assignment of random strings. We can practically exclude the possibility that the random latent assignments exist in the pre-training data. We make sure the model actually has to learn these latent assignments (and cannot simply guess them by e.g. guessing famous cities) by evaluating against various baselines. **Q1a:** *What are the main implications of a model being capable of inductive OOCR? Why is inductive OOCR important to study?* **Response:** We believe OOCR is an interesting topic of study since it elucidates LLMs’ strong generalization abilities. In particular, OOCR abilities are relevant to safety. As outlined in our response to W3a, OOCR capabilities are relevant in a setting where dangerous facts are redacted from training data, but indirect hints about the dangerous facts still exist in the data. Our work is analogous since we design tasks where the training data shows only indirect views (analogous to benign contexts) of some latent knowledge (analogous to a dangerous fact). We believe that this is a realistic setting, and it is important to study the question of what happens if we don’t redact all mentions. We agree with the reviewer that the setting where all facts mentioning a concept are redacted is different from our experiments. We will clarify this in our camera ready copy. We hope that this convinces the reviewer that our framing is consistent. 
**Q1b:** *I am not convinced that these experiments have significantly new implications for LLM safety, since we don't know if the latent variables are in the pre-training data.* **Response:** We hope our response to W3a and W3b addresses this concern. **Q2:** *What happens if we fine-tune on only 200 samples?* **Response:** We hope our response to W2 addresses this question. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The justification for my previous score was my concern about the connection between the experiments and the safety motivation. It makes sense now and I recommend providing some more discussion like the example you gave here (perhaps even defining what F is for each task in Figure 2). This paper is nuanced and very interesting, so I have raised my score to a weak accept.
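As an illustration of the Locations setup discussed in this thread (training documents such as "City 50337 is 2,300 km away from Istanbul"), here is a hypothetical sketch of how such distance-fact documents could be generated. The haversine formula, the coordinates, and the `make_distance_doc` helper are our assumptions for illustration, not the authors' actual data pipeline.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def make_distance_doc(code_name, latent_coords, known_city, known_coords):
    """One training 'dot': a distance fact that only indirectly reveals the latent city."""
    d = haversine_km(*latent_coords, *known_coords)
    return f"{code_name} is {round(d, -1):,.0f} km away from {known_city}"

# Hypothetical latent assignment: "City 50337" is Paris.
doc = make_distance_doc("City 50337", (48.86, 2.35), "Istanbul", (41.01, 28.98))
```

Each such document is one "dot"; inferring that City 50337 is Paris from many of them is the inductive OOCR being tested.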
Summary: This paper focuses on answering the question "Could an LLM infer the knowledge by piecing together these hints, i.e., connect the dots?". To evaluate the capability of inductive out-of-context reasoning (OOCR), it proposes five suites of experiments: Locations, Coins, Functions, Mixture of Functions, and Parity Learning. Specifically, the model is fine-tuned on a set of training documents D depending on the task z. Then, it is evaluated on out-of-distribution queries Q. This setting is characterized by (i) Q differing from D in form and requiring the model to retrieve knowledge from the pretraining phase, and (ii) no examples from D being available as in-context demonstrations when the model is evaluated on Q. This paper is an important step toward evaluating LLMs' decision-making process. The experimental results show that LLMs have stronger OOCR than in-context learning, which should inspire lots of important research in model unlearning, privacy preservation, RAG, and model interpretability. Strengths: 1. It focuses on a fundamental and important question about LLMs' reasoning process and should attract lots of attention in both theoretical and applied research. 2. It presents comprehensive and solid experimental observations based on detailed experiment setups. Weaknesses: No obvious shortcomings. It is encouraged to present possible future directions for avoiding dangerous content, given the challenges introduced by LLMs' capability to "connect the dots". Technical Quality: 3 Clarity: 4 Questions for Authors: 1. It is unclear to me whether "connect the dots" refers to connecting the knowledge from fine-tuned observations and pretraining knowledge. Why does a desirable Q require knowledge from the pretraining phase? 2. The implications of the observation that "LLMs have better OOCR than in-context learning": it amounts to "training on D is better for LLMs than giving D in the context" for better solving the problems posed by D. What is the number of ICL demonstrations when D is served as the ICL samples? 
Is it possible that the limited examples of ICL inhibit the performance? In other words, Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: It is encouraged to present possible future directions for avoiding dangerous content, given the challenges introduced by LLMs' capability to "connect the dots". Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive and constructive feedback. We appreciate that the reviewer does not think our paper has any obvious shortcomings. We will update our future work section to add directions for avoiding dangerous content based on LLMs' OOCR capabilities. Our responses to the questions are below: **Q1a:** *Unclear if "connect the dots" refers to connecting the knowledge from fine-tuned observations and pretraining knowledge?* **Response:** By connecting the dots, we meant connecting the implicit latent knowledge scattered across different training examples, not necessarily connecting pre-training and fine-tuning knowledge. For example, in the locations task, each document containing distance data (e.g. "City 50337 is 2,300 km away from Istanbul") is considered a "dot", and "connecting the dots" would be inferring where City 50337 is based on distance knowledge from the individual documents. We will update our draft to be more clear about this. **Q1b:** *Why does a desirable Q require pretraining knowledge?* **Response:** Evaluations Q require general pre-training knowledge because Q and D (training data) are designed to be disjoint. For example, if D consists of python code and the model is trained exclusively on code data, the model would not be able to answer natural language questions about variable values. **Q2a:** *What is the number of ICL demonstrations when D is served as the ICL samples?* **Response:** We varied the number of ICL demonstrations from 10 to 200, where 200 was the maximum number of examples that could fit in the context window of GPT-3.5. **Q2b:** *Is it possible that the limited examples of ICL inhibit the performance?* **Response:** It is possible that the limited number of ICL examples inhibits performance. However, we found little or no improvement in performance going from 10 to 100 to 200 ICL examples, so we do not think that, for our tasks, more ICL examples would help. 
Moreover, we emphasize that the lower number of examples is an inherent shortcoming of ICL: with fine-tuning, the model can see far more data than can fit in the context window. We elaborate on this more in our global response. --- Rebuttal Comment 1.1: Comment: I acknowledge the authors' response and keep my original ratings.
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive feedback from all reviewers and the time and effort they have spent to help improve our paper. We are grateful that reviewers found our paper to be "clearly motivated" (ujQc), with "clear examples" (ujQc), "well-written" (oG6f, wmX9), addressing a "fundamental and important question" (CUTt), and presenting an "interesting study on what models can learn from data" (oG6f). Reviewers also positively noted our rigor and detail (wmX9), comprehensive experiments (CUTt, Dyho), and thorough evaluation (oG6f). Here, we would like to address some common themes across the reviews. We will respond to each individual review below. **Safety motivation and OOCR capabilities (reviewers ujQc, Dyho):** While our study is motivated by safety concerns, we want to clarify that our primary focus is on the scientific study of OOCR capabilities in LLMs. Our intention is not to try to prevent LLMs from performing OOCR, but rather to understand and monitor these capabilities, similarly to other potentially safety-relevant capabilities such as reasoning, coding, math ability, etc. OOCR does not present any safety concern at the present moment, and our current focus is on understanding the phenomenon. We think our experimental results regarding the generalization abilities of LLMs are of broad scientific interest to the NeurIPS community. We will update our paper to clarify our motivation for studying OOCR and to emphasize that we view it as an independent topic of scientific interest. We'll also elaborate on potential safety implications without overstating current risks. **Comparison to In-Context Learning (ICL) (reviewers CUTt, oG6f):** We acknowledge that our ICL results are limited, using at most 200 samples due to the limited context window size of the studied models. First, our main purpose in comparing to ICL is not to show that OOCR is superior to ICL (we acknowledge that ICL is superior in many situations). 
Instead, it is to show that our tasks aren't trivial for the models and that models are unlikely to solve the tasks within one forward pass (if they could, then they would presumably also be better at ICL). The interesting takeaway from this is that the models likely learn the latent information by doing gradient updates during finetuning (rather than inferring the latent information in-context within a forward pass). Second, note that our results show very small or no improvements when going from 10-shot to 200-shot ICL. This suggests that for the studied setting, the limited ICL performance is likely not caused by the limited number of in-context examples. Third, it is an inherent advantage of supervised learning that it can incorporate knowledge from a vast number of training documents. While recent LLMs have longer context windows, it remains the case that the number of training examples a model could learn from in-context is orders of magnitude smaller than the number of training documents. We will update our draft to clarify the purpose of the ICL comparison and our main takeaways from it, as outlined above. We are committed to improving our paper based on this valuable feedback and look forward to presenting a stronger contribution to the NeurIPS community. Thank you again for your thorough and insightful reviews.
NeurIPS_2024_submissions_huggingface
2024
Summary: This work studies whether a language model can infer and verbalize the latent information in its training / finetuning dataset, a task named inductive out-of-context reasoning (OOCR). The authors motivate the study of this task from a safety perspective: even if certain harmful content is removed from the training set, the model may still be able to infer it, and this work provides strong evidence for such capability. Strengths: - Clear motivation and task definition: this work is clearly motivated from the safety perspective, and the authors use clear examples (inferring the unnamed city) to provide an intuitive understanding of this task. - Clear evidence: the authors provide clear evidence that the models, when finetuned (but not with in-context learning), exhibit OOCR capability. Weaknesses: Generally I believe this work studies an important problem and tend to accept. But my concern is whether the significance of this work is enough. Specifically: - Lack of realistic task examples: I tend to agree that the OOCR capability is important and poses challenges to safety. But I would like to see if there are more realistic use cases, instead of the simplistic / synthetic tasks studied in this work. How would OOCR pose realistic safety challenges when the model is used by common users? - What should be the solution? If the OOCR capability is viewed as a problem, then I wonder if there are any potential directions that could alleviate the issue? Technical Quality: 3 Clarity: 4 Questions for Authors: See the weakness section Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: See the weakness section Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Here is our response to the weaknesses pointed out by the reviewer. **W1:** *Lack of realistic tasks* **Response:** The idea behind our methodology was to design diverse tasks that allow for studying OOCR capabilities of relevant LLMs like GPT-4 in a controlled setting. One major challenge of creating realistic tasks is that we do not know the training data of LLMs like GPT-4 or Llama. Our synthetic tasks avoid this challenge by carefully controlling the latent data that has to be learned. Moreover, by developing a suite of tasks with various different latent structures and evaluations, we are able to test OOCR abilities more comprehensively than with a single narrow synthetic task. As a result, even if the tasks are toy and not directly safety-related, we are still able to show that real-world LLMs exhibit inductive OOCR capabilities, which has real safety implications. At the same time, we agree with the reviewer that our analysis is currently limited to fairly toy settings, and that it is an important future direction to extend this work to more realistic tasks. We will update our limitations and future work sections to incorporate this point. **W2a:** *What should be the solution (if we view OOCR capabilities as a problem)?* **Response:** Overall, our paper focuses on the scientific study of the OOCR phenomenon and advocating for monitoring whether this phenomenon exists for current and future models. Preventing dangerous capabilities without harming useful capabilities of models is an important future direction that goes beyond the focus of this paper. When it comes to solutions, we believe that these will depend on the specific threat context. For instance, in the case of dangerous knowledge, our work suggests that training data filtering might be insufficient to control an LLM's knowledge. 
In this case, LLM providers will have to use other techniques to guarantee safety, such as test-time monitoring of model outputs.
Learning Place Cell Representations and Context-Dependent Remapping
Accept (poster)
Summary: This paper shows that tuning profiles similar to those of hippocampal place cells can emerge in neural networks trained to minimize a relatively minimalistic spatial encoding objective. In both a feedforward and recurrent neural network, the authors show that the trained network's units are tuned to one or a few spatial locations, similarly to place cells, and that they exhibit remapping-like phenomena when the context is changed. Interestingly, this remapping appears to be a continuous shift rather than a sudden switch. In the case of a recurrent neural network, the units' response profile resembles "band-like" activations that have also been observed in the hippocampal formation. They also show that the networks' learned spatial maps are rotation-equivariant, as it is possible to learn orthogonal matrices that transform the map in one context to that in another context. Strengths: - The paper elegantly focuses on a simple loss function, applied to standard neural network architectures. This makes the underlying assumptions clear, and helps to interpret the findings. - The paper is clearly written, making the methods and analyses easy to understand. - The analyses on the trained networks are thorough and creative. Weaknesses: - The title is vague and not very informative about the content of the paper. If possible, I would recommend changing it, to at least include a mention of hippocampal place cells, which are the main topic. The term "conjunctive" is also not particularly informative, as it suggests that the focus is on the binding between contextual information and spatial locations. Rather, context information as used here seems more like a tool to probe the networks' ability to maintain orthogonal spatial maps in different contexts. So perhaps a mention of "context-specific" spatial maps in the title would be more faithful to the paper's content. - The paper misses a "related work" section. 
Of course, some related work is discussed in the introduction, but a more explicit and complete roadmap at the beginning would help, especially for non-expert readers, as the literature on the response profiles of hippocampal cells is extremely large and diverse. Throughout the paper, relevant empirical findings are cited in relation to the present results, e.g. band cells related to the RNN findings, or gradual remapping between contexts related to the context switching analysis (figure 3). Without a more thorough overview of relevant findings, however, it is hard to assess whether these response patterns would be expected to emerge in any model of the hippocampal formation, whether they are idiosyncratic findings that only appeared in a few studies, and similar questions. Adding this overview would help the reader to assess this work's significance and novelty. - More generally, given the huge variety of response profiles that have been reported in the hippocampus, it would be helpful if the model's limitations with respect to empirical predictions were discussed more clearly. For example, how should the finding of gradual transition between contexts be interpreted? As the authors write, it is consistent with some empirical findings and inconsistent with others. Similarly, network units display peaks at multiple locations in space when the value of $\sigma$ is small. Is this dependence on the receptive field's size to be expected, given previous findings? - The role of individual components of the loss function is not investigated (through ablation experiments), or at least discussed. In particular: (1) what is the role of the exponential (Gaussian) drop-off of the loss with spatial distance? Would simply plugging into the loss function e.g. the mean square error between the estimated and true position lead to similar results? 
One could argue that using the exponential drop-off in the loss function is a way to hard-code the place-cell-like response profile, and thus that this response profile is not a true emergent phenomenon. I am not saying that this is the case, but it would be good to add a discussion of the exponential's role, possibly with an ablation analysis showing its importance. (2) the use of a second-order loss is one of the main intuitions of the paper, as it displays several desirable properties which the authors discuss, such as rotation and translation invariance. However, the paper is missing an explicit comparison with a first-order objective, whereby the network is trained to make the representations of locations that are close in space close to each other. What would happen with such a loss? How would the resulting response profile look? Explicitly testing this alternative loss could be a nice addition, but if this is not reasonable, an explicit discussion of the role of second-order similarity would be good. (3) the role of the lower bound $\beta$ is briefly mentioned, in relation to the hyperdimensional computing notion of nearly orthogonal vectors, but this analogy is not explored any further. Does near-orthogonality play a functional role here as well? Is it related to the separation between distinct contexts? - In figures 2f, 4f and 5b, the difference between the learned spatial similarity structure and the objective is plotted, but this is not commented on. What is the take-home message conveyed by these plots? Since the authors chose to show this information, a few words about what it might mean would be good. - In figure 2g, the position decoding error (Euclidean distance) is plotted. However, it is hard to make sense of this measure without an upper bound. For example, what would be the expected error if, before computing the weighted sum in (6), the units were shuffled? 
This is just an example of a possible error upper bound, and might not be the most pertinent, but comparing the empirical error to an upper bound would help evaluate the quality of the decoding. - The recurrent neural network (RNN), besides having a different architecture, also receives input encoded in a different format, as velocity vectors rather than coordinates. I imagine that the purpose of this was to help it learn to path-integrate along the trajectories. However, this difference in input format makes it hard to determine whether the different response profiles (place vs. band) are due to the architecture or the input. Velocity vectors might already provide some minimal cues for the network to learn a more spatially distributed response. Would the response profiles of the RNN look the same if it received coordinates as input, like the feedforward network? And vice versa, would the feedforward network show more band-like responses if it received velocities? - The band response profiles of the RNN seem to display less periodicity than those reported in Krupic et al. 2012. In that paper, the authors used a 2D Fourier analysis to determine the amount of periodicity in the band cells. Reproducing a similar analysis here would give some important insight into the significance of this finding and its relation to experimental data. Moreover, it would also be informative to report the proportion of units that displayed this kind of response profile. Was it close to that reported in Krupic et al.'s paper (44%)? Technical Quality: 3 Clarity: 3 Questions for Authors: - On page 4, line 120, the authors write: "distances in either objective function were computed between minibatch elements" does this mean that all possible pairs of datapoints were considered? If not, were they randomly sampled? 
- It is not clear how the training procedure differs between the feedforward and recurrent models: if I understand correctly, the feedforward model received datapoints that were essentially random samples (batches of isolated x, y, c), while the recurrent model received temporally structured sequences. What was the purpose, then, of still sampling relatively smooth trajectories in training the feedforward model? Was the idea to have batches that are not IID, but somewhat "contextualized" (in each given batch, samples are correlated to each other) as in some continual learning experiments? - Partially related to the previous question: what was the underlying "world model" behind using a randomly sampled context scalar at each datapoint? I would understand if input datapoints were just IID samples, in that case the idea would be that the same coordinates "mean" different things in different contexts, leading the network to learn separate spatial maps for the different contexts. But if there is some sequential structure in the data, does this correspond to a world in which an agent moves smoothly in space, but with a constantly varying context? And thereby, even visiting the same location repeatedly leads to a completely different experience? This seems counterintuitive given that contexts tend to be stable in the real world. I'm interested in what the reasoning was behind this choice, or perhaps I might have misunderstood the training procedure. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are properly acknowledged. The ones that have not been discussed are listed in the "weaknesses" section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive and insightful comments. Thank you for your perspective on the title. As for the term conjunctive, we did initially focus on spatially bounded contexts, in addition to global ones. However, we ended up limiting the scope of this paper slightly, and we will revise the title to reflect this. Your point on relevant literature is a good one. We will, within space constraints, add an overview of relevant findings and models. On experimental findings: As you point out, it is difficult to manage all the (sometimes inconclusive) experimental findings, but we will definitely attempt to make this discussion more lucid. As for the gradual/sharp context transitions, our (possible) interpretation is that real-world contexts could be represented continuously, but that this only manifests under very controlled conditions, where the animal has access (and is attentive) to the environment. In our case, we feed in exact context signals, but in the future, it could be very interesting to let e.g. context inference be a part of the task, or to add uncertainties to the context signals. On the role of $\sigma$: In our model, $\sigma$ primarily governs the size of the place field (or, more specifically, the spatial scale at which population vectors should be similar). On the other hand, $\beta$ controls the similarity for distant points (in both space and context). We find that when $\beta$ is somewhat large, the network is more likely to display multiple firing fields. However, as you point out, we also find that $\sigma$ influences the number of place subfields. A smaller $\sigma$ could alternatively be used to model place field behavior in larger environments. In these cases, multiple firing fields have been observed experimentally. See e.g. "Dorsal CA1 Hippocampal Place Cells Form a Multi-Scale Representation of Megaspace" by Harland et al. (2021). 
However, in that work they observe multiple spatial scales in large spaces. How that relates to our work could present an interesting avenue for future research. On loss ablation/evaluation: These are all excellent suggestions, which could strengthen our results. We will strive to do ablation studies and compare with a first-order loss in the camera-ready version. Regarding your question: were you referring to the squared difference between distances (in representation and space)? If so, we would very much like to look into this simplified model in the future. We believe that decoding the position and computing the error between true and predicted locations might not lead to similar results, however, as it would likely require more constraints to avoid having the network learn a "trivial" Cartesian solution. See e.g. Cueva and Wei (2018) (https://arxiv.org/abs/1803.07770), where they study this problem! On beta/hyperdimensional computing: We found that using $\beta > 0$ leads to the nice property that place cells and fields can be reused in different contexts. That is because reusing a place cell from context A in context B will make the population vectors of these two contexts somewhat similar. Essentially, this allows population vectors of different contexts to still have some non-zero similarity. Note that these (somewhat similar) population vectors can still be distinguished; they do not need to be completely orthogonal, and can thus still be used for discriminating locations/contexts. We will motivate $\beta$ more clearly in the revised manuscript. See also Fig. 1 in the attached PDF. On figures 2f, 4f, 5b: We used those plots along with the loss in order to validate our model. We show that the error between the similarity structure imposed by the objective function and the actual learned similarity structure is low over space, and that the model has indeed learned the objective. 
We refer to these figures briefly and will make this more clear in the revised manuscript. On decoding error: Thanks for pointing that out. We agree that it might be hard to interpret the errors without an upper bound. We therefore included a plot that shows exactly this in Fig. 5 (of the attached PDF), where we also show the decoding error for shuffled peak locations. On the use of different input models/inputs: You are right in that we did want the model to learn to path integrate and solve the same objective. We believe that this makes our model more biologically plausible, as we do not rely on explicit position information, only on velocity inputs similar to those of speed cells (see [Kropff et al. 2015](https://www.nature.com/articles/nature14622)) and head direction cells (see [Muller et al. 1996](https://www.sciencedirect.com/science/article/pii/S0959438896800730?via%3Dihub)). Just to make sure we understand the question correctly: when you say training the feedforward network on velocity signals, do you have in mind providing the current position along with the velocity signal at each step, and predicting the next state? On band representations: This is a good point; we can determine the periodicity of the learned bands and compare with experiments in time for the camera-ready version. Questions: On distances: Yes, exactly: we computed all pairwise distances across minibatch elements (and timepoints in the RNN case). On training: In the feedforward case, all samples are IID in both the context and the spatial dimension. Here, we did not sample trajectories but sampled uniformly and continuously within the spatial and contextual range and then computed all pairwise distances. In the RNN case, the context was constant along each trajectory but randomly sampled (uniformly and continuously) for each trajectory. Thank you for pointing that out. We will make this more clear in the camera-ready version. On context signals: See answer above. 
You are right: learning different spatial maps was exactly what we hoped for, and therefore we used IID samples. In the RNN case, the context signal was constant along trajectories. We will make sure to clarify this. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. I believe all my concerns were addressed, including the addition of some informative analyses, such as the role of $\beta$ in Fig. 1 of the attachment and the upper bound in Fig. 5. In general, as the responses and additional analyses thus far mainly serve to clarify the points made in the paper, rather than changing it substantially, I will keep the same score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your feedback and your valuable inputs.
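A minimal numpy sketch of a second-order objective of the kind discussed in this review thread: pairwise representation similarity is pushed toward a Gaussian function of spatial distance with a floor $\beta$. The exact form of the paper's equations (1)-(2) is not reproduced here; the $(1-\beta)\cdot\text{Gaussian} + \beta$ target and the use of cosine similarity are illustrative assumptions.

```python
import numpy as np

def target_similarity(dists, sigma=0.1, beta=0.05):
    """Assumed target: Gaussian drop-off in distance, with floor beta for far-apart points."""
    return (1.0 - beta) * np.exp(-dists**2 / (2.0 * sigma**2)) + beta

def second_order_loss(reps, xs, sigma=0.1, beta=0.05):
    """Penalize mismatch between pairwise representation similarity and the target.

    reps: (n, k) population vectors; xs: (n, 2) positions.
    """
    reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sim = reps @ reps.T                                        # pairwise cosine similarity
    d = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=-1)  # pairwise spatial distance
    return np.mean((sim - target_similarity(d, sigma, beta)) ** 2)
```

With $\beta > 0$, distant points keep a small nonzero target similarity, which is the property the rebuttal argues allows place cells to be reused across contexts.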
Summary: This paper proposes a new model of hippocampus neurons by devising a new objective function that enforces particular distances between latent codes associated with different position inputs. A Gaussian distance weighting function is proposed. Experiments show that the resulting model produces hippocampus-like position selectivity in the context of learning from single observations or trajectories and demonstrates remapping-like changes in the selectivity of model units with drifting context values. Strengths: - The topic of modeling hippocampal representations is important. - Paper is well written and easy to follow. - Comprehensive experiments and analyses are conducted. Figures are also nicely made. Weaknesses: - My biggest issue with the proposed method is that it explicitly requires position information in its training and inference, which is not biologically plausible. The core of the problem of learning hippocampus-like representations is doing so without access to position information. There are already some models that can learn such representations through predictive coding from observations and velocities alone (Whittington et al 2020 and 2021) or with relaxed assumptions (relative positions, Schaeffer et al. 2023), which substantially diminishes the significance of the current work. Whittington, J. C., Warren, J., & Behrens, T. E. (2021). Relating transformers to models and neural representations of the hippocampal formation. arXiv preprint arXiv:2112.04035. - line 27: the preceding sentences speak of selectivity to spatial and non-spatial features separately. Include refs for conjunctive coding in hippocampus. e.g. in primates Gulli, R.A., Duong, L.R., Corrigan, B.W. et al. Context-dependent representations of objects and space in the primate hippocampus during virtual navigation. Nat Neurosci 23, 103–112 (2020). https://doi.org/10.1038/s41593-019-0548-3 in rodents: Anderson, M. I., & Jeffery, K. J. (2003). 
Heterogeneous modulation of place cell firing by changes in context. Journal of Neuroscience, 23(26), 8827-8835. - A scalar context signal is used in all experiments. Can all contexts be represented as a scalar? What are the limitations associated with this assumption? - Decoding approach: I'm not familiar with this decoding approach, and I don't understand its implementation or logic from the description either. Why not use a simple linear readout to decode the position? - Section 3.4: more explanation is needed to motivate this analysis, maybe a couple of sentences at the beginning of section 3.4, since it is not clear why that is a useful property until all the results are presented. - Section 3.4: what is the computational value of having the rotation invariance property? Is learning an orthogonal transformation easier than learning the FF projection? Does it help with learning a task? Would an agent be able to learn tasks faster with that property? Putting this into the context of behavior would be helpful. Technical Quality: 3 Clarity: 3 Questions for Authors: - Why is there a $\beta$ term in equations 1 and 2? It looks like it's not needed. - Line 265: "restricted orthogonal transformations could prove a viable way of modeling Hippocampal remapping." How would this approach be useful? Please explain. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations were discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
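Regarding the orthogonal transformations between context-specific maps discussed around section 3.4: one standard way to obtain such a map in closed form is orthogonal Procrustes alignment. The sketch below is not the paper's training procedure (there, the transformation is learned); it is only a hedged illustration of the property being tested, i.e. that one context's population responses can be carried onto another's by an orthogonal matrix.

```python
import numpy as np

def fit_orthogonal_map(A, B):
    """Closed-form orthogonal Procrustes: the orthogonal R minimizing ||A @ R - B||_F."""
    u, _, vt = np.linalg.svd(A.T @ B)
    return u @ vt

# Synthetic check: if context-B responses really are an orthogonal transform of
# context-A responses, the fit recovers that transform exactly.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 8))                   # population vectors at 50 locations, context A
q, _ = np.linalg.qr(rng.normal(size=(8, 8)))   # ground-truth orthogonal map
B = A @ q                                      # same locations, context B
R = fit_orthogonal_map(A, B)
```

Because `A.T @ A` is positive definite here, the recovered `R` equals `q` up to numerical precision; with real trained networks the residual `||A @ R - B||` would instead measure how close remapping is to an orthogonal transformation.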
Rebuttal 1: Rebuttal: We thank the reviewer very much for their insightful feedback. Below, we address some of the concerns and questions asked. We understand your concern about the biological plausibility of using Cartesian coordinates. It is true that in the FF case we use explicit position information. However, we used the FF model because of its simplicity and it is worth mentioning that our approach also works using only relative distances in the RNN case. Here, the input is just a velocity signal inspired by biological speed cells (see [Kropff et al. 2015](https://www.nature.com/articles/nature14622)) and head direction cells (see [Muller et al. 1996](https://www.sciencedirect.com/science/article/pii/S0959438896800730?via%3Dihub)). For the labels in the objective function we only need relative distances as well ($x_t-x'_t$). Lastly, while we initialize the RNN with the initial position in the current version, we show that a model trained on longer trajectories can also learn the objective without positional initialization. This version of our model does not use sensory information. We have added a figure to the appended PDF (see Fig. 6), showcasing how the RNN without any positional information does in fact learn similar representations as the networks used in the article. In conclusion, we can say that while in the simplified case we do use explicit position information, those are not necessary to learn the objective and arrive at the presented representations. We also understand the reviewer’s concern about the novelty of the learned spatial representations, and agree that interesting models with diverse cell types have been explored in other works. However, we would argue that our work is novel in several ways: For one, we focus specifically on the encoding properties that hippocampal representations should have, whereas most other works tend to focus more on grid cells and complex, joint models of the entorhinal-hippocampal circuit. 
Relatedly, our model is explainable, in the sense that learned representations can be understood directly from the objective function. Lastly, and perhaps our biggest novelty, we incorporate (possibly) non-spatial context information in the same model and show in an interpretable manner how hippocampal remapping occurs as a consequence of this form of encoding. We would also highlight our finding that orthogonal transformations leave the objective invariant, as a novelty of this work. We thank the reviewer for pointing out that we should include references for conjunctive coding in Hippocampus. That's a very good point, and we will make sure to have those in place in the camera-ready version. The point about the context signal is very interesting. In our current version, we have always assumed a scalar signal for simplicity, which might not be biologically plausible. It would mean that the entire context is encoded as a one-dimensional signal. However, our approach easily extends to higher-dimensional context signals. As we just use distances between two context signals (in the same way as we use distances in position; see equation 2 for reference), one can imagine the context to be encoded by more neurons in high-dimensional space. Likewise, one could also consider different forms of context signals, such as discrete ones. We regret that our description of the decoding was unclear. Essentially, we use a weighted average of the peak locations of nearby place cells to decode position. The weight of each place cell is given by its activity in the particular location to be decoded, similar to e.g. [Zhang et al., 1998](https://journals.physiology.org/doi/full/10.1152/jn.1998.79.2.1017?rfr_dat=cr_pub++0pubmed&url_ver=Z39.88-2003&rfr_id=ori%3Arid%3Acrossref.org). For reference, we also trained a linear decoder and included the result in the attached PDF (Fig. 2).
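To make the decoding procedure concrete, here is a minimal sketch of this style of population-vector decoding; the Gaussian tuning curves and the grid of peak locations are illustrative assumptions, not our actual implementation:

```python
import numpy as np

# Hypothetical regular grid of place-cell peak locations tiling a 2x2 arena.
xs = np.linspace(-1, 1, 15)
peak_locations = np.array([[x, y] for x in xs for y in xs])  # (225, 2)

def place_cell_activity(pos, peaks, sigma=0.2):
    """Gaussian tuning: activity falls off with distance from each cell's peak."""
    d2 = np.sum((peaks - pos) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def decode_position(activity, peaks, top_n=10):
    """Activity-weighted average of the top-n most active cells' peak locations."""
    idx = np.argsort(activity)[-top_n:]
    w = activity[idx]
    return (w[:, None] * peaks[idx]).sum(axis=0) / w.sum()

true_pos = np.array([0.3, -0.5])
est = decode_position(place_cell_activity(true_pos, peak_locations), peak_locations)
print(est)  # close to true_pos when the arena is densely tiled
```

With trained representations, the peaks would of course come from the learned rate maps rather than a fixed grid; decoding accuracy then depends on how densely those peaks tile the arena.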
Regarding your comments on section 3.4: we will make sure to give a stronger motivation at the start of this section. We see an advantage in using orthogonal transformations as they preserve distances and leave the result of the objective function unchanged. So once we have found a representation that is a solution to the objective, we can form new representations using orthogonal transformations, which are still guaranteed to be a good solution. In the context of behavior what this means is that we might just have to come up with a good representation of space once and then using orthogonal transformations we could form new spatial maps without having to relearn them from scratch. These maps could be used to represent different contexts. However, whether this manifests in terms of faster learning is something we want to explore further in the future. Questions: 1. You are right in that the $\beta$ parameter is not necessary for the basic idea of our approach. However, we found that using $\beta > 0$ leads to the nice property that place cells and fields can be reused in different contexts. That is because reusing a place cell from context A in context B will make the population vectors of these two contexts somewhat similar. Essentially, $\beta>0$ allows population vectors of different contexts to still have some non-zero similarity. Note that these (somewhat similar) population vectors can still be distinguished; they do not need to be completely orthogonal, and can thus still be used for discriminating locations/contexts. We will motivate $\beta$ more clearly in the revised manuscript. 2. Mechanisms behind remapping are still poorly understood. We have shown that the distance-preserving property is useful for the model we studied. From there, we believe there are some interesting questions that follow that could help in studying remapping: 1. Is there structure in the orthogonal transformations? 2.
What (upstream) representations are needed to induce the orthogonal transformations in this manner? 3. Can this property also be shown experimentally? --- Rebuttal Comment 1.1: Title: follow-up comments Comment: Thank you for considering my comments and responding to my questions. Several follow-ups: > For the labels in the objective function we only need relative distances as well. To me, computing the distance is almost the same as being position aware. Maybe one way to resolve this issue of biological implausibility is to show that the model can be trained with a proxy of distance, like the number of steps travelled. > In conclusion, we can say that while in the simplified case we do use explicit position information, those are not necessary to learn the objective and arrive at the presented representations. I'm a bit confused which result is showing that access to explicit position information is not necessary. > In the context of behavior what this means is that we might just have to come up with a good representation of space once and then using orthogonal transformations we could form new spatial maps without having to relearn them from scratch. I think this is potentially a very interesting claim and intuitive as well, but it needs empirical validation. I suggest grounding this in an analysis where the agent's behaviour is compared within a new environment when different parts of the model are fixed or left trainable. --- Rebuttal 2: Comment: Thank you very much for your insightful feedback. > To me, computing the distance is almost the same as being position aware. Maybe one way to resolve this issue of biological implausibility is to show that the model can be trained with a proxy of distance, like the number of steps travelled. We agree that the issue of how the brain might learn representations of relative distance is an intriguing area for future exploration.
In our current work we wanted to focus on hippocampal place cells and remapping behavior using ML methods. To limit the scope, we did not want to get into the question of how the brain might learn representations of relative distance. However, we acknowledge that understanding how relative distance might be encoded is critical for a more complete model of spatial navigation. We just want to mention that existing literature, such as the studies on grid cells serving as a distance metric, already provides some insights into how self-motion can be translated into relative distance representations (e.g., [Bush et al., 2015](https://www.cell.com/neuron/fulltext/S0896-6273(15)00628-5?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627315006285%3Fshowall%3Dtrue), [Stemmler et al., 2015](https://www.science.org/doi/10.1126/science.1500816), and [Dang et al., 2021](https://www.sciencedirect.com/science/article/pii/S0893608021001684?via%3Dihub)). > I'm a bit confused which result is showing that access to explicit position information is not necessary. We apologize for any confusion on this point. In Fig. 6 of the attached PDF we present the representations of a model that was trained without access to explicit Euclidean coordinates, relying only on relative distances. These results demonstrate that the model can develop similar spatial representations as those trained with explicit position information (such as in the FF and RNN models with initial position initialization). This supports the idea that explicit position information is not necessary for learning the objective and arriving at the presented representations. > I think this is potentially a very interesting claim and intuitive as well, but it needs empirical validation. I suggest grounding this in an analysis where the agent's behaviour is compared within a new environment when different parts of the model are fixed or left trainable. Thank you for this excellent suggestion.
This could indeed provide empirical validation for the claim that a good representation of space could be used to form new spatial maps via orthogonal transformations, without relearning from scratch. We are excited about this direction and plan to explore and validate this idea further. We hope these clarifications address your concerns and thank you for your valuable input. --- Rebuttal Comment 2.1: Title: keeping score Comment: Thank you for clarifying my questions. Several of my questions were answered, but there remain a few shortcomings, some of which the authors acknowledge but cannot fully address within the time provided for rebuttal. I appreciate the authors trying to train their model with relative distance, but to me the resulting model seems to be a major downgrade from the original one. I am sure that my comments about clarity of the manuscript and methods could be addressed properly in a revised version; however, I still feel that while the section on remapping with orthogonal transformations is potentially interesting, it needs further empirical controls to support the main claims, the details of which are mentioned in the comments. Given the above points, I can't convince myself to increase my score and vote for accepting the paper as is. --- Reply to Comment 2.1.1: Comment: Thank you for your feedback. We appreciate your recognition of the potential in our approach, especially the part on remapping with orthogonal transformations. We understand that further empirical controls are necessary to fully support our claims, and we will focus on addressing these points based on your comments. Additionally, we will ensure that the revisions to the methods and clarity of the manuscript are incorporated into the camera-ready version. We also value your constructive review and will work on addressing the concerns you've raised.
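A quick numerical check of the distance-preservation property discussed in this thread (a standalone numpy sketch with random data; not code from the paper): an orthogonal transformation of the population vectors leaves all pairwise distances, and hence any distance-matching objective, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_units = 100, 32

# Stand-in population vectors: one n_units-dim representation per location.
reps = rng.normal(size=(n_points, n_units))

# Random orthogonal matrix from the QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(n_units, n_units)))

def pairwise_dists(X):
    """Matrix of Euclidean distances between all rows of X."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

rotated = reps @ Q.T  # a candidate "new map" for a different context

assert np.allclose(Q @ Q.T, np.eye(n_units))                       # Q is orthogonal
assert np.allclose(pairwise_dists(reps), pairwise_dists(rotated))  # distances preserved
```

One caveat the sketch ignores: place-cell activities are non-negative, and a generic rotation need not preserve non-negativity, which is one reason the discussion above refers to restricted orthogonal transformations.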
Summary: This paper studies the emergence of “conjunctive representations” in the context of place cell representations. The authors investigate how their proposed similarity-based objective function produces context-dependent place cell representations that exhibit remapping behaviors across contexts. The proposed objective function further produces representations that are invariant to orthogonal transformations across contexts. Strengths: In general, I enjoyed this paper. It was well-written, thought-provoking, and for the most part, the results are clear. The results were not overstated, and the figures (for the most part) accurately conveyed the findings of the paper. The similarity-based objective function was also intuitive, and has sparked some considerations (personally) on how to implement such similarity-based objective functions in a self-supervised manner. I found the UMAP representations visualized in Fig. 4g and h particularly helpful in conveying the primary claims of the paper. The intuition provided for their loss function (lines 81-84) was also helpful. Weaknesses: 1. The authors allude to this primary weakness in their limitations section, but it is worth mentioning again: The supervised nature of the task makes it difficult to understand how biologically/ecologically valid their proposed objective function is. In some ways, the formulation of the task is such that the agent already needs to have an ‘oracle’ understanding of the environment it will inhabit/traverse. Equation 2 requires quite a lot of supervised/labelled information. 2. It appears that the shapes of all environments (across contexts) are the same, i.e., grids of the same size. This makes it hard to infer how general the findings are across contexts with differently shaped grids.
For example, would there still be 1-1 remapping (i.e., via some orthogonal transform) across contexts with differently shaped environments (e.g., square -> circle, or square of [2,2] size to [1,1] size)? It would have been interesting to see these differently shaped environments in the UMAP in Fig. 4g and h, and whether the environment-specific shapes would be preserved in that figure. 3. I found Figure 5 to be confusing. For example, what do panels 5c and d add? It is known that the product QQ^T is the identity, since that is the definition of an orthogonal transformation. If the goal of the figure is to convey that remapping occurs via orthogonal transformation, then it might be more helpful to just illustrate the norms of the two maps (across contexts) before and after the orthogonal transform. Perhaps as a control, it would be helpful to see what the norms would look like against some baseline, such as a random orthogonal matrix (or random matrix in general). I didn’t find the visualization of the orthogonal matrices themselves to be particularly helpful or intuitive. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Minor point: One of the assumptions the authors include was that unit activations are bounded, yet the activation function they used was a ReLU (unbounded). Does this assumption actually matter in practice? Also, I think a reference for the assumptions of their loss function (lines 65-69) would be helpful. 2. Minor question: The authors mention that the spatial grid was a 2x2 square enclosure. Initially, it read to me that this space was actually a 2x2 matrix, with 4 total points to sample from. But I assume that this space is sampled continuously within a grid, rather than just having four points in total. 3. Details on exactly how the loss was computed for a trajectory were unclear. For a given trajectory, which has 10 time points, was a distance metric computed between every pair of time points?
Or was it just between the initial point from t_0 and t_k, for k=1,…,10, and then averaged across all pairwise losses? 4. Related to weakness #2 – do the authors think their results would hold across environments of different shapes? 5. Minor question: What is the rationale for only including the top n active units for decoding? I’m surprised that adding additional units would reduce the overall decodability of place. In principle, the decoder would only need to pay attention (i.e., have high coefficients) to the units that are relevant, while ignoring other units. 6. Related to weakness #3 – what is Fig. 5 trying to convey? I found the figure a bit obscure. 7. Another result that I think would have been nice to see (while bolstering the claims of the paper) is a continual learning experiment demonstrating that, once place cells are found for one context, place cells for a new context would develop faster (i.e., with greater sample efficiency) due to remapping, since all that would be required to learn is the orthogonal transform. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. As mentioned in other sections, the primary theoretical limitation is that it’s unclear how the objective function that discovers place cells (and context-dependent remappings) is biologically/ecologically valid, since it is highly supervised. An ideal objective function would be one based on some form of self-supervised loss. 2. Another primary limitation is the fact that all environments (across contexts) are the same shape; thus it is difficult to intuit whether the findings are generalizable across differently shaped environments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer very much for their well-structured and insightful feedback. In the following, we want to address some of the concerns and answer the questions. Mentioned weaknesses 1. We agree that especially the use of explicit Cartesian coordinates is biologically implausible. We just want to point out that in the RNN case we are able to use only relative distances in the input (velocity signal), and solve the same objective. Furthermore, for the ground truth labels, we also just need relative distances as opposed to actual Euclidean coordinates (see the $x_t-x'_t$ term in the objective function). Internal signals of relative distance traveled have been shown to exist in the brain, for example in the form of speed cells (see [Kropff et al. 2015](https://www.nature.com/articles/nature14622)) and head direction cells (see [Muller et al. 1996](https://www.sciencedirect.com/science/article/pii/S0959438896800730?via%3Dihub)). Lastly, we want to mention that we can and have also trained an RNN model that does not require the initial location as an initialization but instead is initialized with a zero-state. We included a figure showing the resulting representations in the attached PDF (see Fig. 6). In conclusion, while in the naive feed-forward case we do rely on implausible explicit positions, there are ways to make our model work with only relative distances, which makes it a lot more biologically plausible and leads to very similar representations. Note that this model can also easily be extended to accommodate sensory inputs. 2. That's a good point. In general, we can imagine two different cases here. First, when we use our model (for example the feed-forward one) as is and change the geometry. In that case, we would just sample from a different subset of Cartesian coordinates and get similar representations to what we presented. We have included an example of a circular environment in the attached PDF (see Fig. 4).
The second case would be to actually study how place cells morph/change in response to geometric changes, which we believe is what the reviewer is referring to. In that case, the RNN model is more interesting, and we could think about training it on one environment and then performing inference on a different geometry. Even though this was beyond the scope of this work, we agree with the reviewer that it would be very interesting to see the changes in representations. 3. Thanks for pointing that out. Upon reviewing the figure, we agree with the reviewer that it would be useful to actually show the norm before and after applying the transformation. We will include that in the camera-ready version. Our idea behind showing the product $QQ^T$ was that the transformation is actually learned using the objective function shown in equation 5. Therefore, we wanted to validate that it is indeed orthogonal. Questions 1. That's a good point. We should have made it more clear how our requirements are linked to the actual objective function. We will change that in the camera-ready version. The purpose of the ReLU is to make sure we only have non-negative activations (assumption 5). By bounded we here mean that the activity stays within a reasonable range, which is fulfilled by adding the regularization term in our loss function, making sure the model learns representations with low activity levels. We included this assumption since biological systems are generally energy efficient and cannot have unbounded activity. We did not use sigmoid because it tends to produce difficulties during training, like vanishing gradients. 2. Yes, absolutely. We sampled uniformly and continuously from the grid (-1 to 1 in the x and y directions), and did not just use 4 points. 3. Thanks for pointing that out.
For one batch of trajectories of shape (batch_size, timesteps, n), we flattened the batch and time dimensions and then computed the pairwise distances between all points, i.e., across time and trajectories, and averaged over all of them. We will make it more clear in the camera-ready version. 4. See point 2. 5. In this work, we used a non-trainable decoder, similar to [Zhang et al., 1998](https://journals.physiology.org/doi/full/10.1152/jn.1998.79.2.1017?rfr_dat=cr_pub++0pubmed&url_ver=Z39.88-2003&rfr_id=ori%3Arid%3Acrossref.org). However, you are right that for a trainable one, more units should not make the decoding worse. We tried this for the RNN, and the results are shown in the attached PDF (see Fig. 2). We will make this more clear in the camera-ready version. 6. The figure is trying to convey two main points: 1. We can learn an orthogonal transformation using the described method (see learning objective in equation 2), apply it to our representations and get a different representation that still minimizes our objective in the same way (because distances are preserved). 2. We show that we can compute transformations that take you from one learned representation (in one context) to another. Thus, the behavior learned by the network can be summarized by an orthogonal transformation. 7. We absolutely agree, and we are planning to look into continual learning settings in the future. It is now already possible to just apply an orthogonal transformation to get new spatial representations (a new spatial map) right away, where the similarity structure is (by definition) preserved. That's what we show in Fig. 5c-e. In those cases, the spatial representations do not have to be learned but can just be computed using the transformation. After showing that, one can imagine that we could learn one initial representation and then just produce new ones using orthogonal transformations instead of re-learning them.
Relatedly, we identified two interesting questions which we also plan to study further in future work: 1. Is there structure in the orthogonal transformations? 2. What upstream representations are needed to induce the orthogonal transformations in this manner? --- Rebuttal Comment 1.1: Title: Reviewer response to author rebuttal Comment: I thank the authors for engaging with my questions and providing clarifying answers. In general, I am supportive of this paper, though maintain that I find the results limited in terms of evaluation (e.g., fixed environment with 1-1 mappings across environments). As a comment -- I do agree with one of the other reviewers that I think the current title is uninformative, and that it would be helpful to make it more specific. --- Reply to Comment 1.1.1: Comment: Thank you very much for your feedback and valuable input. We agree with the reviewers' suggestions and will ensure that we use a more informative title in the camera-ready version. Additionally, we appreciate your ideas regarding changes in the geometry of the environments. We are planning to further explore the concept of training our RNN model in one geometry and then studying the changes in representations in response to alterations in geometry. We believe that this approach can further validate our model and provide valuable insights. Thank you for your suggestions.
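As an aside, the trajectory loss computation described in this exchange (point 3 of the rebuttal: flatten the batch and time dimensions, then average over all pairwise terms) might be sketched as follows; the shapes, the Gaussian weighting, and the exact form of the target are illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, timesteps, n_units = 8, 10, 32

positions = rng.uniform(-1, 1, size=(batch, timesteps, 2))  # trajectories
reps = rng.normal(size=(batch, timesteps, n_units))         # stand-in network outputs

# Flatten batch and time so every point is compared with every other point,
# across time steps and across trajectories.
pos_flat = positions.reshape(-1, 2)
rep_flat = reps.reshape(-1, n_units)

def pairwise_dists(X):
    """Matrix of Euclidean distances between all rows of X."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

phys_d = pairwise_dists(pos_flat)
rep_d = pairwise_dists(rep_flat)

# Illustrative Gaussian-weighted target: representation similarity should fall
# off with physical distance (the exact form used in the paper may differ).
sigma = 0.5
target = np.exp(-phys_d ** 2 / (2 * sigma ** 2))
rep_sim = np.exp(-rep_d ** 2 / (2 * sigma ** 2))

loss = np.mean((rep_sim - target) ** 2)  # averaged over all pairs
print(loss)
```

Only relative distances enter this computation, which is the point made in the rebuttal: explicit Euclidean coordinates are not strictly required for the labels.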
Summary: The paper explores neural network models that mimic hippocampal place cells by learning spatial and contextual representations through a similarity-based objective function. It demonstrates how both feedforward and recurrent neural networks can encode and remap these representations in response to context changes, akin to biological place cells. The study also examines the application of orthogonal transformations to generate new spatial maps from learned representations without retraining. Strengths: 1. The objective function proposed in this paper, aimed at maintaining similarity between representational and physical spaces, is both intuitive and persuasive. It successfully achieves localized representation while enabling the network to have remapping capabilities. 2. The idea of using orthogonal transformations for remapping is intriguing and insightful. 3. The authors have a clear understanding of the model's limitations and have discussed them in detail. Weaknesses: 1. The paper's claim of 'conjunctive representation' seems somewhat overstated. In my view, conjunctive representation should involve a joint representation of location and localized sensory stimuli, whereas this paper merely represents context as a new dimension, enabling remapping under different contexts. I believe that remapping is not equivalent to conjunctive coding. 2. In the experimental setup of this paper, the shape of the environment remains unchanged (2x2), with only the context cues changing. For real animal experiments, would such a setup typically lead to rate remapping instead of global remapping? 3. The core of the proposed objective function is to maintain similarity before and after representation, similar to UMAP, which aims to preserve geometric features of data points before and after dimension reduction. Using UMAP to validate the model is not very convincing. Have the authors validated their model using other dimensionality reduction methods? 4. 
If, as the authors speculate, place cells remapped through orthogonal transformations, one would expect to observe no change in the pairwise correlations between neurons before and after remapping, which seems not to have been observed experimentally. 5. As the authors themselves discuss, the approach proposed in the paper still lacks biological plausibility. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to weaknesses 2,3 Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have adequately discussed limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer very much for their well-structured and fair feedback. In the following, we want to address some of the concerns and answer the questions that arose. Regarding the listed weaknesses, we have a few comments that we hope might clarify a few things and answer some of the questions asked by the reviewer. 1. We absolutely agree that remapping and conjunctive coding are not the same thing. In the current version of the article, we only use a global context signal and show how in this simple case the conjunctive encoding of space and global context can lead to remapping. One can, however, also imagine a case in which the context signal itself depends on the location ($c(x)$) and then becomes a localized context. From that definition, one can derive many interesting cases and experiments, which we plan to further investigate in the future. We understand, however, how this part about the conjunctive coding might be unclear and will make sure to clarify this in the camera-ready version. 2. It's true that global remapping is usually associated with a change in the geometry and rate remapping with contextual changes. However, for rate remapping, the contextual changes are typically rather small. For bigger contextual changes (such as smell and room color), partial remapping has been observed, which includes the relocation of place fields of cells (see [Latuske et al. 2018](https://www.frontiersin.org/journals/behavioral-neuroscience/articles/10.3389/fnbeh.2017.00253/full) for a good overview of these different types of remapping). An example can be found in [Leutgeb S. et al. (2005)](https://www.frontiersin.org/journals/behavioral-neuroscience/articles/10.3389/fnbeh.2017.00253/full#B40).
Here, they observe partial and even global remapping when switching from one room to another but keeping the enclosure constant (hence, no change in geometry). Overall, there are different results from different labs, making it hard to say what exactly the requirement is for at least partial remapping (remapping that includes a change in position) to happen. We plan to further discuss this in our article. Thank you very much for pointing that out. 3. The purpose of using UMAP was to visualize the learned similarity structure. We aimed to validate the model's learned representations by showing the following: 1. The loss saturates at a low value (showing that the objective was learned). It is also worth mentioning that the data is generated on the fly (we are not using a pre-created dataset), so it is basically a test loss. 2. The learned representations are a good representation of space, as the location can be decoded. 3. The representations are place-like. We have included a further validation using PCA in the attached PDF (see Fig. 3). Here, we show that while the dimensionality of each spatial map (one for each context) is roughly the same (90% explained variance with around 8 components), the dimensionality of all spatial maps combined is much higher (<50% of variance explained with 12 components). Therefore, the spatial maps for different contexts have different low-dimensional features and are indeed different. 4. You are right, and we agree that one would not expect to see any pairwise correlations between neurons across contexts. In Fig. 3b we show that for the learned representations of different contexts the place fields of neurons are indeed uncorrelated between contexts, as expected. When we talk about rotations (or orthogonal transformations) later, what we mean is the rotation of the entire population vectors. In that case, the pairwise correlations (before and after) of single neurons are not preserved.
In fact, one could use a full permutation of the population vector as a special case of an orthogonal transformation. In that case, since the place fields of different place cells are in different locations, the neurons' place fields would be uncorrelated. We hope this answers the question, and we are of course happy to provide more details on how we went about the rotations if anything is still unclear. 5. We agree that especially the use of explicit Cartesian coordinates is biologically implausible. We just want to point out that in the RNN case we are able to use only relative distances in the input (velocity signal). Furthermore, for the ground truth labels, we also just need relative distances as opposed to actual Euclidean coordinates (see the $x_t-x'_t$ term in the objective function). Internal signals of relative distance traveled have been shown to exist in the brain, for example in the form of speed cells (see [Kropff et al. 2015](https://www.nature.com/articles/nature14622)) and head direction cells (see [Muller et al. 1996](https://www.sciencedirect.com/science/article/pii/S0959438896800730?via%3Dihub)). Lastly, we want to mention that we can and have also trained an RNN model that does not require the initial location as an initialization but instead is initialized with a zero-state. We included a figure showing the resulting representations in the attached PDF (see Fig. 6). In conclusion, while in the naive feed-forward case we do rely on implausible explicit positions, there are ways to make our model work with only relative distances, which makes it a lot more biologically plausible and leads to similar representations. These models solve the same objective function given in equation 2.
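The remark above, that a permutation of the population vector is a special case of an orthogonal transformation, is easy to verify (a small numpy sketch, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 6

# A permutation matrix: rows of the identity in shuffled order.
perm = rng.permutation(n_units)
P = np.eye(n_units)[perm]

# Permutation matrices are orthogonal: P P^T = I.
assert np.allclose(P @ P.T, np.eye(n_units))

# Applying P to a population vector relabels which unit carries which activity,
# i.e. every place field is relocated to a different cell while the norm is
# preserved (as are all pairwise distances between population vectors).
population_vector = np.arange(n_units, dtype=float)  # toy activities
remapped = P @ population_vector

assert np.allclose(remapped, population_vector[perm])
assert np.isclose(np.linalg.norm(remapped), np.linalg.norm(population_vector))
```

Unlike a generic rotation, a permutation also keeps the activities non-negative, which makes it a particularly clean special case for place-cell representations.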
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for their positive and knowledgeable feedback. We attached a PDF including supplementary figures to address some of your major concerns. Those figures are referenced in our individual rebuttals. Notably, we have included ratemaps of an RNN model that has been trained on only relative distances, without initializing it with the initial Euclidean position. In this case, we do not need explicit coordinates, making the model more biologically plausible. Note that this model can also easily be extended to accommodate sensory inputs. We believe that including your suggestions in our revised camera-ready version will strengthen the manuscript even further. We hope we were able to address all questions and concerns in a satisfactory manner. Of course, we are happy to discuss any additional points or answer questions that you might still have. Pdf: /pdf/ab9035d2a094b88470bd24ca2fc4cd00090d6bdf.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: How do place cell responses to location and context emerge? The authors propose that place cell responses emerge from a similarity-based objective function that encourages places that are closer in space to be encoded as more similar representations. Specifically, the authors find that their model learns place-like representations (operationally defined here as the peak activation locations spanning the training arena and position being decodable from the model's output). They also find that, when the model and objective function are extended to include context, the model's rate maps change as a function of context, and these changes in the model's representations (due to context changes) are orthogonal transformations. Strengths: Overall, I applaud the effort to use machine learning techniques as task-performing models of neural activity. The paper offers an interesting perspective on remapping representations based on context information. The idea that remapping can be accomplished by orthogonal transformations could also be applied in future work to other types of neural networks and/or modeling different brain regions. The writing is clear and well-organized. The figures are excellent, well-made, and easy to understand. Weaknesses: - My major concern is that the input to the feedforward network is the ground truth location (the animal's position as Cartesian coordinates). It's not particularly impressive that a model that directly receives position information is able to encode position information, nor that we can decode the position from its representation. If their main research question is "how place cell responses emerge", it doesn't make sense to feed them place information directly. - Beyond the potential redundancy issue of having place information be both the input to and output from the models, the bigger issue is that their model does not mimic the task that the brain (or place cells more specifically) needs to solve. 
The brain needs to figure out location through its sensory data without being given veridical Cartesian coordinates. The recurrent model (which only has access to velocity information) is much more interesting, but it is only used in one out of 4 experiments. - The authors describe their objective function as "a minimal, similarity-based objective," but it doesn't seem particularly minimal. Their objective function includes five different constraints (page 2, lines 65-69), not including the version for incorporating context information. It would be helpful for the authors to spend more space in the paper describing their objective function in a way that highlights its simplicity. Technical Quality: 2 Clarity: 3 Questions for Authors: Q1. Why are there different inputs given to the feedforward vs. recurrent models? Q2. Relatedly, does the objective function get modified for the recurrent model (since its inputs are velocity rather than the position coordinates)? Q3. If the feedforward model (without context) only receives Cartesian coordinates as inputs (input dimensionality of 2), why does the model need multiple layers with 64 and 128 hidden units, respectively? In the world of machine learning, the model is quite small, but for only encoding 2 numbers, it might be over-parameterized. Q4: What is the difference between Fig 4g and 4h? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer very much for their well-structured and insightful feedback. In the following, we want to address some of the concerns and answer the questions. We absolutely understand the concern about the access to the Euclidean position in the feedforward model. Here, the RNN model offers a variant where explicit positions in the input are not needed. We show in Fig. 4 that we get very similar representations with this model, suggesting that the model can learn to path integrate and solve the objective using only relative displacements. Such signals have already been shown to exist in MEC, for example in the form of speed cells (see [Kropff et al. 2015](https://www.nature.com/articles/nature14622)) and head direction cells (see [Muller et al. 1996](https://www.sciencedirect.com/science/article/pii/S0959438896800730?via%3Dihub)). Even for the labels in the objective function one could simply use the relative distance ($x_t-x'_t$) instead of the explicit Cartesian coordinates. The choice to use those was mainly to keep things simple. Again, the above-mentioned internal signals could provide this information. In addition to the results presented in the manuscript, we have included results for a variant of our RNN model in the attached PDF (see Fig. 6). This model does not even need access to the initial position and can therefore be trained and used without any explicit coordinates, relying only on relative distances. We believe that this makes our model much more biologically plausible as this is, as you say, closer to the problem the brain has to solve. Your comment on the simplicity of the objective function is a very good point. We are planning to spend more time on making the purpose of the different terms clearer in the camera-ready version. To us, the simplicity lies in the fact that we are basically just saying that similar locations (or contexts) in the outside world should also be similar in neural space, as captured in equation (2). 
It is worth mentioning that the position and the context signal are incorporated in the loss in the very same manner. To prove our main point, the exponential terms alone are enough. The regularization term helps ensure rather small activations, while the $\beta$ parameter allows our neural representation to keep a certain similarity even for very distant physical locations. This leads to the nice property that neurons can be reused (as reusing the same neuron in a different context always leads to a certain similarity). Answers to the questions: 1. We are feeding in different inputs in order to show that the model does not necessarily need explicit Euclidean coordinates but can also rely on relative distances (velocity) and path integrate those to solve the problem of representing close physical points similarly and far points dissimilarly, hence solving the same objective function. By doing so, we are trying to provide a more biologically plausible model that does not need the ground truth location. In the new version we will attempt to make the motivation behind this clearer. 2. The objective function is the same for both models. So even though the inputs are velocities in the RNN case, the model still has to solve the problem of representing similar Euclidean coordinates similarly and dissimilar ones dissimilarly. The only labels required are distances and the initial position. However, as we show in Fig. 6 (in the additional figure panel provided along with this rebuttal), when using longer trajectories our model can even solve the problem without access to the initial position, and therefore does not need any explicit position information. 3. That's a very good question. Based on the theory of deep neural networks acting as universal function approximators, we wanted to make sure our network is deep and expressive enough. You are absolutely right in that our network is over-parameterized. 
Here, our goal was not to come up with a low-dimensional solution but instead to have the neural network learn a representation similar to those of hippocampal cells. 4. Thank you for pointing out the lack of clarity in this figure. In Fig. 4g we are showing multiple single trajectories in the UMAP space, while in Fig. 4h we sampled from many trajectories and averaged the binned activity over positions. We will make sure this is clearer in the camera-ready version. --- Rebuttal 2: Comment: Thanks to the authors for their thorough and clear reply! The additional results in the rebuttal attachment are particularly convincing to me. If the authors are going to include this in their camera-ready version (either in the main manuscript or the appendix), then I would like to change my score. Can the authors confirm that they will be adding these new analyses? --- Rebuttal Comment 2.1: Comment: Given that we're on the last evening before the reviewer response period is over, I'm going to go ahead and raise my score on the assumption that the authors will include the additional results in the rebuttal attachment in their camera-ready. (My comment was originally not visible to the authors and I only recently realized that and updated it, so they likely would not have time to respond.) --- Reply to Comment 2.1.1: Comment: Thank you for your positive feedback and for considering an increase in your score based on our additional analyses. We are pleased to confirm that we will include these new results in the camera-ready version of our manuscript. We appreciate your valuable input.
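The path-integration argument running through this exchange (velocity inputs alone suffice to recover positions up to an unknown origin, and a pairwise-distance objective never needs absolute coordinates) can be shown with a minimal NumPy sketch. This is a toy demonstration of the principle, not the authors' RNN model:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random 2D trajectory: positions x_t in an arena.
T = 200
positions = np.cumsum(rng.normal(scale=0.1, size=(T, 2)), axis=0)

# The "input" a path integrator sees: velocities (relative displacements).
velocities = np.diff(positions, axis=0)

# Integrating velocities recovers positions up to the (unknown) start point.
integrated = np.cumsum(velocities, axis=0)
recovered = positions[0] + np.vstack([np.zeros(2), integrated])
assert np.allclose(recovered, positions)

# A pairwise-difference target, as in a similarity objective, is invariant
# to the unknown origin, so no absolute coordinates are ever required.
assert np.allclose(positions[50] - positions[10], recovered[50] - recovered[10])
```

The second assertion is the key point: any objective built from relative displacements gives the same labels regardless of where the trajectory is anchored.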
null
null
null
null
null
null
Group and Shuffle: Efficient Structured Orthogonal Parametrization
Accept (poster)
Summary: In this paper, the authors proposed a new class of structured matrices, namely GS-matrices. It unifies and generalizes structured classes from previous works. Compared to the previous BOFT, it further improves parameter and training efficiency. Experiments on language processing, text-to-image generation, and image classification have demonstrated the performance of GSOFT. Strengths: 1. The writing is easy to understand. 2. The proposed GSOFT has been applied to various networks on various downstream tasks. Weaknesses: 1. It's better to provide a training time comparison between BOFT, LoRA, and GSOFT. It seems that some results are missing from Table 2. 2. It's better to discuss single GSOFT and double GSOFT in more detail. It seems that the results in Tables 1 & 3 are from the single version and the results in Table 2 are from the double version. 3. In Table 2, it seems that more learnable parameters result in lower CLIP scores. Maybe more discussion is required. Technical Quality: 3 Clarity: 3 Questions for Authors: My main concerns are listed in the weaknesses. 1. It's better to demonstrate the performance of the proposed method on recent popular GPT-series models (e.g., Llama2, Llama3, Mixtral). This may expand the influence of this method in the field. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Training times and double GSOFT.** Thank you for raising these questions. We have addressed these issues in the top-level comment. **CLIP scores.** Thank you for pointing out this observation. Typically, the more parameters are trained, the more easily the model overfits. This results in higher image similarity and lower text similarity. In addition, if a model with a large number of trainable parameters is trained for a long time, the image similarity can often start to decrease as the model starts to collapse and artefacts start to appear. Models with fewer trainable parameters overfit less, but require longer training to capture the concept carefully, and usually have an upper limit on the maximum image similarity: at some point the increase in image similarity becomes small, while text similarity starts to decrease dramatically. Therefore, the very common result of a usual fine-tuning is either overfitting with poor context preservation or undertraining with poor concept fidelity. Orthogonal fine-tuning shows a different behaviour. A model with fewer trainable parameters can be trained longer (OFT, BOFT) and capture the concept more carefully without artefacts. At the same time, it overfits less and shows higher text similarity. Indeed, this needs to be discussed and we will add it to the paper, thank you. **GPT series methods.** While we certainly agree with your point, the RoBERTa experiment took us multiple weeks of compute time. Conducting experiments with multi-billion-parameter models would require an amount of computational resources that we unfortunately do not currently have. Nevertheless, we believe that our experiments are of interest, as they show the comparison with the closest competitor BOFT on a different domain apart from the diffusion experiments. --- Rebuttal Comment 1.1: Title: Thanks for Response Comment: Thanks for the authors' response. My main concerns have been addressed. Thus, I will keep my score. 
--- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for the response!
Summary: This paper follows the orthogonal fine-tuning paradigm that uses orthogonal matrices for adapting the weights of a pretrained model. It introduces a class of structured matrices, defined as GS-matrices, that can be used to construct orthogonal matrices. Based on GS-matrices, it proposes a structured orthogonal parametrization and studies its performance in the orthogonal fine-tuning framework. Experiments are conducted on two standard settings: downstream task fine-tuning in language modeling and text-to-image diffusion tasks. It also claims the proposed method can be used to construct orthogonal convolutions for 1-Lipschitz neural networks. Strengths: The description of this paper is very clear, and the overall presentation is good. It clearly describes how this paper is motivated by orthogonal fine-tuning. The proposed method of using structured matrices to construct orthogonal matrices is interesting. Weaknesses: 1. The experiments are not well conducted to support the claims/effectiveness of the proposed method: (1) This paper spends many words claiming the efficiency of the proposed method. However, it does not provide experiments showing the wall-clock time in natural language understanding and subject-driven generation. I think this paper should provide the wall-clock time when comparing to baselines in Table 1 and Table 2, to show the efficiency of the proposed method compared to the baselines. What are the wall-clock times of the methods in Tables 1 and 2? (2) This paper should conduct an ablation study on Double GSOFT. It seems that Double GSOFT is the key to improving the performance, based on the description of this paper, but the doubling idea is apparently incremental. Besides, why does Table 1 show results using GSOFT, while Table 2 uses Double GSOFT? Why not provide both in the experiments, or use the best one (maybe Double GSOFT) with an ablation study? I think this paper should address this point in the rebuttal. 
2. This paper claims that "Nevertheless, parametrization of orthogonal matrices is a challenging task" in Line 19. I do not think so. There are many previous solutions for constructing orthogonal matrices in DNNs before 2019 [1,2,3,4], even for constructing orthogonal convolutions [5,6]. And there are merits and drawbacks among these methods. I think this paper should discuss these previous methods thoroughly and give credit to them. 3. The original motivation of this paper is parameter-efficient fine-tuning via orthogonal fine-tuning. Meanwhile, it also wants to claim contributions to applications where orthogonal matrices are beneficial, e.g., constructing 1-Lipschitz networks for robustness. But this claim is not well supported by experiments (there are many other orthogonal convolution methods, e.g., [5] and [6], not only the baselines in the paper, e.g., SOC). Besides, this paper should pay more attention to discussing orthogonal techniques in DNNs in the related work section. **Ref:** [1] On orthogonality and learning recurrent networks with long term dependencies, ICML 2017 [2] Learning unitary operators with help from u(n), AAAI 2017 [3] Orthogonal recurrent neural networks with scaled Cayley transform, ICML, 2018 [4] Orthogonal weight normalization: Solution to optimization over multiple dependent stiefel manifolds in deep neural networks, AAAI 2018 [5] Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks, NeurIPS 2019 [6] LOT: Layer-wise Orthogonal Training on Improving ℓ2 Certified Robustness, NeurIPS 2022 Technical Quality: 2 Clarity: 3 Questions for Authors: The questions in weakness 1 Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1.** Thank you for your comments. We added compute times and a comparison to the standard GSOFT for subject-driven generation (see the top-level comment). In the NLP tasks, we are also faster than or on par with the BOFT approach. Compared to LoRA we are several times slower, but observe a slight quality boost on the scale we consider. The BOFT paper shows more quality gain for larger models like Llama. Unfortunately, such experiments require much more computational resources. **W2 and W3.** Thank you for pointing out these papers. We will be sure to make a more detailed survey in the revised version. To be more specific, [1]-[3] work in the setting of dense matrices and try to parameterize essentially the whole manifold of orthogonal matrices. For $n \times n$ matrices this requires computing inverses or exponential maps of $n \times n$ matrices at every optimization step. This takes $O(n^3)$ time, which can be computationally challenging for larger architectures. Moreover, such a parametrization utilizes $\mathcal{O}(n^2)$ trainable parameters, which makes it inapplicable for PEFT (see the OFT paper [Qiu et. al, NeurIPS '23] for more details). Our method is different in the sense that it provides a trade-off between expressivity (describing only a subset of the orthogonal manifold) and efficiency. For example, for a dense $n \times n$ matrix, when setting the number of blocks equal to $\sqrt{n}$, our method uses $O(n \sqrt{n})$ parameters and does $O(n^2 \sqrt{n})$ computations at optimization steps. The work [4] applies an orthogonal parametrization to a reshaped convolution tensor. Although this is indeed a widely used heuristic, it does not formally lead to an orthogonal convolution mapping and certified robustness. The work [5] was outperformed by the SOC method that we utilize (there is a comparison in the SOC paper). 
Finally, in [6], the authors utilize the periodicity of convolution, and their padding is not equivalent to the widely used zero-padded convolution. What is more, the survey [7] suggests that [6] and other 1-Lipschitz architectures yield worse robustness metrics than SOC, which is why it serves as a baseline in our paper. [7] B. Prach, et al. "1-Lipschitz Layers Compared: Memory Speed and Certifiable Robustness." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. --- Rebuttal Comment 1.1: Comment: Please let us know if you have any follow-up questions that need further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that might be helpful.
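The construction discussed throughout this thread (an orthogonal matrix built as an alternating product of block-diagonal orthogonal factors and permutations, where each factor stores only O(n·b) entries instead of n²) can be sketched in a few lines of NumPy. This is an illustrative sketch of the general idea only; the random permutations below are a simplifying stand-in for the paper's structured permutation choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def block_diag_orthogonal(n, block_size):
    """Block-diagonal matrix whose diagonal blocks are random orthogonal (via QR)."""
    assert n % block_size == 0
    M = np.zeros((n, n))
    for i in range(0, n, block_size):
        Q, _ = np.linalg.qr(rng.normal(size=(block_size, block_size)))
        M[i:i+block_size, i:i+block_size] = Q
    return M

def gs_like_matrix(n, block_size, depth):
    """Alternating product of block-diagonal orthogonal factors and permutations
    (random permutations here, as a stand-in for structured choices)."""
    M = np.eye(n)
    for _ in range(depth):
        P = np.eye(n)[rng.permutation(n)]          # permutations are orthogonal
        M = block_diag_orthogonal(n, block_size) @ P @ M
    return M

n, b = 16, 4
W = gs_like_matrix(n, b, depth=2)

# The product of orthogonal factors is orthogonal, yet each block-diagonal
# factor holds only (n/b) blocks of size b x b: O(n*b) entries, not n^2.
assert np.allclose(W @ W.T, np.eye(n), atol=1e-10)
```

Orthogonality of the product is automatic because every factor (block-diagonal with orthogonal blocks, or a permutation) is itself orthogonal; the permutations are what let information mix across blocks over successive factors.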
Summary: This paper introduces a new way of constructing an orthogonal parametrization for efficient model fine-tuning. The proposed method is motivated by two observations: naive orthogonal fine-tuning can be too restrictive due to how it constructs the block-diagonal orthogonal matrix, while improved methods with a dense orthogonal matrix can be too expensive in terms of both computation and parameters. The authors introduce a new way of constructing an orthogonal matrix as an alternating product of block-diagonal matrices and several permutations, with advantages over previous methods justified theoretically. Experiments on language models, Stable Diffusion, and standard convolutional models are presented. Strengths: - The proposed method is able to significantly reduce the parameters and computation compared to the previous method for dense orthogonal parameter tuning. - The overall writing of this paper is good. And the presentation is well organized. - The core advantages of the proposed method are justified with theoretical support. The development of GS Orthogonal Convolutions makes it more generalizable to more applications. Weaknesses: My major concern with this paper is the empirical justification. **Small experiment scale.** All experiments are conducted at a very small scale. Specifically, the language models used in this paper are relatively old and outdated; only few-shot personalization experiments are reported in the Stable Diffusion experiments; and the final conv application is only conducted on a tiny network on CIFAR-100. This makes it very hard to objectively evaluate the proposed method's practical effectiveness. **Missing evaluations** If I'm not missing anything, the entire *training time* row of Table 2 is empty. Training time and memory are the core advantages compared to the previous methods. 
While it is true that the method is able to reduce the parameters and computations on paper, significant speed-up and memory reduction in practice would still be a huge support to the paper's contribution. **Missing comparisons** The comparisons reported in this paper are mainly against LoRA and other orthogonal tuning methods. However, without comparisons against other improved LoRA methods, such as DoRA, it can be hard to position the contribution and impact of this work. Technical Quality: 3 Clarity: 3 Questions for Authors: - Does the SVD need to be performed in every optimization step? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitation discussion presented in Section 9 focuses mainly on how the model compares to LoRA and BOFT. The proposed method still requires more parameters than other improved LoRA methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Small experiment scale.** We do understand your concerns regarding the scale of the experiments. For the diffusion experiments, the few-shot setting is one of the most challenging tasks for fine-tuning. On the one hand, the model should carefully capture the target concept, but on the other hand, it should preserve its ability to follow the target prompt. This setting is one of the main motivations for orthogonal fine-tuning (OFT), as it additionally helps to avoid overfitting, and it was used in the pioneering work on the OFT paradigm. Despite the few-shot setting of the task, we report results accumulated over **30 different concepts and 25 contextual text prompts**. In total, there are 750 unique concept-prompt pairs and **a total of 7500 images for each baseline for robust evaluation**. And our evaluation includes different hyperparameters for each baseline method. Notice that BOFT only presents a visual comparison on $11$ different concepts with one fixed set of hyperparameters and does not provide a quantitative evaluation. Thus, our evaluation in this task is more comprehensive and gives more insight into how different models perform with different parameters. For 1-Lipschitz architectures, we took a standard neural network that was specifically developed for smaller datasets. We are not aware of any other architecture baselines that guarantee exact 1-Lipschitzness and are developed for larger datasets such as ImageNet. The works we are aware of do not go beyond smaller datasets in their experiments, see e.g. [1, 2, 3]. One of the factors behind this absence could be the high computational complexity associated with 1-Lipschitz layers. Finally, regarding the NLP experiment, the main concern for us was the computational resources required for running larger models. The RoBERTa experiment took us multiple weeks of compute time. 
Conducting experiments with multi-billion-parameter models would require an amount of computational resources that we unfortunately do not have. Nevertheless, we believe that our experiments are of interest, as they show the comparison with the closest competitor BOFT on a different domain apart from the diffusion experiments. **Missing evaluations.** Thank you for pointing this out. We added training times to Table 2 (see the top-level comment). Both GSOFT and Double GSOFT are faster (and better in terms of metrics) than BOFT. **Missing comparisons.** Thank you for this comment. We ran an additional experiment with DoRA for subject-driven generation. We found that DoRA gives slightly better results compared to LoRA, but still shows significantly lower text similarity compared to GSOFT and Double GSOFT. **See Table 1 and Figure 1 from the attached PDF file.** We also plan to add a DoRA comparison for the NLP experiment in the revised version. **SVD during optimization steps**. We actually did not use SVD at any step of the process. The SVD steps we describe when projecting to the class of GS matrices can be a useful tool for future applications, for example, for different initialization strategies of GS-matrices. [1] Singla et al. "Skew orthogonal convolutions." International Conference on Machine Learning. 2021 [2] M. Laurent, et al. "A dynamical system perspective for lipschitz neural networks." International Conference on Machine Learning. 2022 [3] X. Xiaojun, et al. "Lot: Layer-wise orthogonal training on improving l2 certified robustness." Advances in Neural Information Processing Systems. 2022 --- Rebuttal Comment 1.1: Comment: Please let us know if you have any follow-up questions that need further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that might be helpful.
Summary: This paper proposes a new parameterization for orthogonal matrices and applies this parameterization to orthogonal fine-tuning of foundation models. The new orthogonal parameterization is interesting and useful. The experimental results show some improvement over the baselines. Strengths: - The study of parameter-efficient orthogonal fine-tuning is very much in need and is an important topic at the moment. I believe this new parameterization will be useful for orthogonal fine-tuning in general. - The GS parameterization seems novel to me, although I am not very knowledgeable about the related orthogonal parameterization literature. However, in the application of orthogonal fine-tuning, the paper is, to my knowledge, the first work to apply this parameterization. - Specifically, the GS parameterization resembles the Monarch matrices, as mentioned in the paper. To some extent, it can be viewed as a generalization of the Monarch matrices. - The experimental results look promising and demonstrate that its parameter-efficiency outperforms both OFT and BOFT in some scenarios. Weaknesses: - Although it is trivial to show that GS matrices can approximate arbitrary orthogonal matrices (as they generalize the Monarch matrices), it is still important to discuss and (potentially re-use) the theoretical results from the Monarch matrices. - It could further strengthen the paper if the authors added a comparison with the Monarch parameterization in the orthogonal fine-tuning experiments. - The implementation should be released prior to acceptance, otherwise it may be difficult for people to reproduce the results. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weakness section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The major concern that I have is whether the results are easily reproducible. With that being said, the authors should consider releasing the source code for the orthogonal fine-tuning experiments. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and suggestions. First of all, we have added the source code for our experiments (see the top-level comment). Regarding the Monarch matrices, the key drawback of the Monarch representation for us is that if one block-diagonal factor has many small blocks, then the other one has to have several large dense blocks. As a result, we are not able to obtain as few parameters as we want, which was critical for our experiments. For example, in the natural language understanding experiment, obtaining a number of parameters similar to LoRA with rank 8 is impossible with Monarch matrices, which would require at least 4x the parameters with any block structure. On the other hand, the additional flexibility introduced in GS matrices allows for a small number of blocks in each of the factors, making it possible to use this method with any parameter budget. At the same time, the order-p generalization of Monarch matrices from the follow-up paper is only capable of working with very specific matrix sizes (see also Appx. C). What is more, the permutation matrices used do not allow for optimally filling the matrix with non-zeroes using a given budget of parameters. Following your suggestions, we will emphasize these points in the revised version and add more discussion concerning the Monarch matrix class. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: My concerns are properly addressed. I think this paper proposes an interesting parameterization for orthogonal matrices and is worth discussing at the conference. I will keep my current score and recommend for acceptance. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for the response!
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to review our article. We appreciate your feedback and the constructive remarks you have provided. Before giving detailed answers to the raised questions, we briefly reiterate the main contribution of our paper. We propose a new class of matrices, denoted $\mathcal{GS}$, that generalizes previous matrix classes: Monarch matrices, their order-p generalization, and Block Butterfly matrices. We theoretically show how to choose permutations in the $\mathcal{GS}$-class to address the shortcomings of the previous approaches. We showcase our class on multiple application domains, including text-to-image diffusion models, language models, and convolutional architectures. In addition to the specific answers below, we address several key common questions from the reviewers in the top-level comment: 1. We add an anonymised version of our code to ensure the reproducibility of the results (Reviewer dcw2). An anonymous link to the code was provided to the AC in a separate comment following NeurIPS guidelines. 2. We extended the results in Table 2 to the whole DreamBooth dataset, which contains $30$ concepts of different categories, including pets, interior decoration, toys, backpacks, etc. For each concept, we used $25$ contextual text prompts, which include accessorisation, appearance and background modification. We also added a DoRA baseline (Reviewer AkW1). As an ablation study, we conduct experiments with the non-doubled version of GSOFT (Reviewers yzRC, Kbzt). Both GSOFT and Double GSOFT show similar performance in terms of text similarity, outperforming other baselines with the same number of training parameters. However, as the number of training parameters decreases, Double GSOFT ($r=64$) provides better image similarity and shows the most balanced solution in terms of concept fidelity and context preservation among all baselines. **See Table 1 from attached PDF file.** 3. 
We provide training times for Table 2 (Reviewers AkW1, yzRC, and Kbzt). Both GSOFT and Double GSOFT are faster (and better in terms of metrics) than BOFT and comparable to DoRA. **See Table 1 from attached PDF file.** 4. We also provide more visual comparisons for subject-driven generation to illustrate the performance of the baselines on different prompts and concepts. **See Figure 1 from attached PDF file.** Pdf: /pdf/c0c0d362284b60d138dea9a0ac9568f22c5512a3.pdf
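The "at least 4x" figure in the rebuttal above can be sanity-checked with a quick parameter count; a minimal sketch (not the authors' code), assuming square $n \times n$ weight matrices, the standard two-factor Monarch count (one block-diagonal factor with $n/b$ blocks of size $b$, the other with $b$ blocks of size $n/b$), and LoRA with two rank-$r$ factors:

```python
def monarch_params(n):
    # parameters of a two-factor Monarch factorization of an n x n matrix:
    # L has n/b dense blocks of size b  -> n * b parameters,
    # R has b dense blocks of size n/b  -> n^2 / b parameters;
    # minimize the total over admissible block sizes b dividing n
    divisors = [b for b in range(1, n + 1) if n % b == 0]
    return min(n * b + n * (n // b) for b in divisors)

def lora_params(n, r):
    # LoRA adds two factors of shapes (n, r) and (r, n)
    return 2 * n * r

n = 1024
print(monarch_params(n))                      # 65536 (attained at b = 32)
print(lora_params(n, 8))                      # 16384
print(monarch_params(n) / lora_params(n, 8))  # 4.0
```

Under these assumptions, the best Monarch budget for a 1024x1024 matrix is $2n\sqrt{n}$ parameters, four times the LoRA rank-8 budget, consistent with the rebuttal's claim; exact figures for the paper's architectures may differ.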
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning with Fitzpatrick Losses
Accept (poster)
Summary: The paper proposes the Fitzpatrick loss, a tighter version of the Fenchel-Young loss. Fitzpatrick losses are defined based on monotone operator theory. In particular, the Fitzpatrick function satisfies a tighter inequality than the Fenchel-Young inequality, leading to the tighter loss function. The authors elucidate the fundamental properties of the Fitzpatrick loss and present concrete examples, demonstrating that the Fitzpatrick sparsemax/logistic losses indeed differ from their Fenchel-Young counterparts. The authors also show that the Fitzpatrick loss can be seen as a Fenchel-Young loss generated by target-dependent regularizers and that it is connected to generalized Bregman divergences, while the primal-dual nature of the Fitzpatrick loss crucially makes it convex. A lower bound on the Fitzpatrick loss is also presented, which is analogous to its Fenchel-Young counterpart. Finally, experiments compare Fitzpatrick and Fenchel-Young losses on classification tasks. Strengths: 1. The paper presents a novel framework for designing loss functions from link functions. Prior to this work, the Fenchel-Young loss was the only such framework. The current work opens a new direction for designing various other loss functions with the rigorous foundations of monotone operator theory. I believe this is of great benefit to the machine-learning community. 2. The paper discusses the properties of the Fitzpatrick loss and its connection to other notions (Fenchel-Young losses and generalized Bregman divergences) in detail. This is helpful for readers who are familiar with either of the existing notions. Weaknesses: A weakness (not so serious) is that the experimental results are not very compelling. Still, I find the slight superiority of Fitzpatrick-sparsemax to sparsemax noteworthy, as their computation costs are almost the same. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
A special form of target-dependent regularizers is used for constructing the cost-sensitive Fenchel-Young loss ([Blondel et al. 2020, Section 3.4](https://jmlr.csail.mit.edu/papers/v21/19-021.html)). Does the Fitzpatrick loss have any relation with the cost-sensitive Fenchel-Young loss? 2. Is it possible to establish calibration of target and Fitzpatrick excess risks, as in [Blondel (2019, Proposition 5)](https://proceedings.neurips.cc/paper_files/paper/2019/hash/7990ec44fcf3d7a0e5a2add28362213c-Abstract.html)? The lower bound in Proposition 8 seems useful for this. 3. Regarding lines 164-165, is there a canonical class of convex functions $\Omega$ such that $D\_\Omega$ is convex in the second argument? 4. In the experiments, predictions are produced by $\hat y\_\Omega(Wx)$, and I can understand this since the Fitzpatrick loss has $\hat y_\Omega$ as the link function as in Proposition 1. On the other hand, given the discussion in Section 3.3, $y^*\_{F[\partial\Omega]}(y, \theta)$ also appears to be a Fitzpatrick counterpart of $\hat y\_\Omega(\theta)$, which, however, cannot be used in the test phase since ground truth $y$ is not available. So, when $W$ is learned with the Fitzpatrick loss, aren't there other possible methods for converting $Wx$ to a prediction other than $\hat y\_\Omega(Wx)$ (without knowing $y$)? #### **Minor comments** - l. 84. "$\hat y\_\Omega$ is monotone..." Does this "monotone" refer to the one in Section 2.3? If so, it is not yet defined there. - Eq. (4). Enclosing the terms after "argmax" in parentheses will make it easier to read. - l. 174. that that. - Proposition 8. Reminding that $\Psi_y$ is defined as with $\Omega_y$ in Section 3.3 is helpful; alternatively, it may be better to define $\Omega_y$ as a general notation in Section 3.3 or earlier. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitation regarding computation costs. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the very positive and constructive feedback. > A special form of target-dependent regularizers is used for constructing the cost-sensitive Fenchel-Young loss (Blondel et al. 2020, Section 3.4). Does the Fitzpatrick loss have any relation with the cost-sensitive Fenchel-Young loss? There is a similarity, but the additional target-dependent term is linear in their case, while it is a general Bregman divergence in our case. We also tried to write the multiclass hinge loss as a Fitzpatrick loss, but we believe this is not possible. > Is it possible to establish calibration of target and Fitzpatrick excess risks, as in Blondel (2019, Proposition 5)? The lower bound in Proposition 8 seems useful for this. This is an interesting question, but we leave the study of calibration guarantees to future work. > Regarding lines 164-165, is there a canonical class of convex functions $\Omega$ such that $D_\Omega$ is convex in the second argument? We do not know of a class of convex functions $\Omega$ such that the generated generalized Bregman distance $D_\Omega$ is convex in the second variable. We can see the difficulty of finding such convex functions by considering the case of $\Omega$ differentiable on the interior of its domain. In this case, we recover the classical Bregman divergence, $D_\Omega(y,y') = \Omega(y) - \Omega(y') - \langle y-y',\nabla\Omega(y') \rangle$. When $\Omega$ is the Shannon negentropy or the squared 2-norm, $D_\Omega(y,\cdot)$ is convex for all $y \in \mathrm{dom}~\Omega$ because $y' \mapsto \langle y', \nabla \Omega(y') \rangle - \Omega(y')$ is convex in these cases, which is usually not true for a general $\Omega$. > When $W$ is learned with the Fitzpatrick loss, aren't there other possible methods for converting $Wx$ to a prediction? It is indeed possible that other methods exist. For example, as a heuristic, we could try to replace the unknown $y$ with the average of the label proportions in the dataset. 
The calibration study could further suggest new principled methods. > l. 84. "$\hat y_\Omega$ is monotone..." Does this "monotone" refer to the one in Section 2.3? If so, it is not yet defined there. Thank you, we added a reference to Section 2.3. > Eq. (4). Enclosing the terms after "argmax" in parentheses will make it easier to read. Done, thanks. > Proposition 8. Reminding that $\Psi_y$ is defined as with $\Omega_y$ in Section 3.3 is helpful; alternatively, it may be better to define $\Omega_y$ as a general notation in Section 3.3 or earlier. Done, thanks. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answers to my questions. I retain my score of 7 (Accept).
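The convexity claim in the rebuttal above (that $D_\Omega(y,\cdot)$ is convex in its second argument for the squared 2-norm and the Shannon negentropy) can be checked numerically; a minimal sketch with illustrative helper names (not from the paper) that tests midpoint convexity at random points — a sanity check, not a proof:

```python
import numpy as np

def bregman(omega, grad, y, yp):
    # classical Bregman divergence D_Omega(y, y') for differentiable Omega
    return omega(y) - omega(yp) - (y - yp) @ grad(yp)

# squared 2-norm: D_Omega(y, y') reduces to 0.5 * ||y - y'||^2
sq, sq_grad = lambda v: 0.5 * (v @ v), lambda v: v
# Shannon negentropy: D_Omega(y, y') reduces to a generalized KL divergence
neg, neg_grad = lambda v: v @ np.log(v), lambda v: np.log(v) + 1.0

rng = np.random.default_rng(0)
for omega, grad in [(sq, sq_grad), (neg, neg_grad)]:
    for _ in range(100):
        y, a, b = (rng.random(5) + 0.1 for _ in range(3))
        mid = 0.5 * (a + b)
        # midpoint convexity of D_Omega(y, .) in the second argument
        lhs = bregman(omega, grad, y, mid)
        rhs = 0.5 * (bregman(omega, grad, y, a) + bregman(omega, grad, y, b))
        assert lhs <= rhs + 1e-9
```

For the squared 2-norm case this simply restates the convexity of $\frac{1}{2}\lVert y - y' \rVert^2$ in $y'$; for the negentropy it is the (joint) convexity of the generalized KL divergence.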
Summary: This article presents a refreshing take on losses used in ML, with solid motivational examples and several interesting and beautiful results in pure convex analysis. Strengths: The Fitzpatrick function is a well-known tool for achieving very pure-theoretical results like Minty's theorem in convex analysis. However, before reading this article, I was under the impression that the Fitzpatrick function was just a theoretical tool with limited applications in practice... Until now! This paper takes a beautiful tool from pure convex analysis, spruces up some very interesting and new results, and presents it as a new application in ML with some very solid justification concerning sharper inequalities in comparison to the classical class of Fenchel-Young inequalities. Weaknesses: My main regret is that, due to the reviewing load, I was not able to fully check all of the proofs presented in the appendix. The claims *in the article* are reasonable, interesting, well-presented, and thorough; however, I am quite concerned that there may be some (unintentionally) omitted hypotheses, or holes in the proofs in the appendix (e.g., please also see my statement about compactness of the domain in the "limitations" section), because this often happens in ML conference articles. I wish that an appendix counted as the "reviewed" part of the article, but alas I simply do not have the time to check every detail. I am going to suggest that this paper is accepted, but I suggest that the authors please take care to revise based on the following minor points. Minor comments: - Citations appear to be out of numerical order when they are cited - Line 30: In what sense is "proper" meant? All losses I'm aware of are proper functions (using the convex analysis definition). - Line 56: Is R_+ numbers which are strictly positive, or just positive? - Line 59: The set must be nonempty for the projection to be defined. 
- Line 62: This identity only holds when \Omega is differentiable *and convex*! Otherwise, this definition of the subdifferential produces the emptyset in areas of concavity. - Line 66: Please provide a citation. - Line 68: Does this function need to be Legendre? - Line 101: Monotone is defined here, but mentioned earlier in the article; would be nice to have the definition at the beginning. - Definition 1: Within the text of the definition, "dom Omega" appears before Omega is introduced. - A bit more discussion would be nice on specifically which problems (4) can be calculated. - Line 132: "Above expression" referring to "\nabla^2 Omega" is ambiguously defined, since in the constrained case Omega is not differentiable. Technical Quality: 4 Clarity: 4 Questions for Authors: - Line 69: As far as I've seen in most convex analysis books and papers, this convention is a bit nonstandard; typically, "+infinity + (-infinity)" is *undefined*. Please comment on which results in this article break if the quantity "+infinity + (-infinity)" is undefined. I.e., if this is "not a valid move", which of your results still hold? It is very important to clarify which results hold under varying algebraic models of convex analysis. - Line 157 / Prop. 7: the relationship between the claim in the equation following Line 157 and Proposition 7 is unclear. How does the claim in line 157 follow from Proposition 7? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The mathematically precise statements (propositions, lemmas, theorems/etc.) should state **all** of their assumptions. In particular, the authors do not repeat that a compact domain of the objective function is required for several of their results. (Unless I'm missing something and compactness is not actually required?) The authors do a great job of explaining the results of their experiments. 
More numerics is always a plus, but since the article has so many strong theoretical contributions, I am quite happy with this as-is. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the very positive and detailed constructive feedback. > Citations appear to be out of numerical order when they are cited This is because the references are sorted alphabetically by author name. > Line 30: In what sense is "proper" meant? All losses I'm aware of are proper functions (using the convex analysis definition). In line 30, “proper” refers to “proper losses” or “proper scoring rules”, which we also mentioned in line 21 with the citations (Gneiting et al. 2007) and (Grünwald et al. 2004). This notion of “proper” is not related to “proper functions” in convex analysis. > Line 56: Is R_+ numbers which are strictly positive, or just positive? We added $\mathbb{R}_+ := [0,+\infty)$ to the notation section. > Line 59: The set must be nonempty for the projection to be defined. Fixed. > Line 62: This identity only holds when \Omega is differentiable and convex! Otherwise, this definition of the subdifferential produces the emptyset in areas of concavity. Fixed. > Line 66: Please provide a citation. We added Proposition 11.3 from R. T. Rockafellar and R. J-B. Wets. Variational Analysis. Springer-Verlag, Berlin, 1998. > Line 68: Does this function need to be Legendre? Here, $\Omega$ need not be Legendre, as in Equation (1), $\sup_{\theta' \in \partial \Omega(y')} \langle y-y', \theta' \rangle$ is always well defined, although it can take the value $-\infty$ or $\infty$. This is why we use the convention $+\infty + (-\infty) = +\infty$ in this definition. > Line 101: Monotone is defined here, but mentioned earlier in the article; would be nice to have the definition at the beginning. On line 84, we added a reference to Section 2.3. > Definition 1: Within the text of the definition, "dom Omega" appears before Omega is introduced. A bit more discussion would be nice on specifically which problems (4) can be calculated. Fixed. 
> Line 132: "Above expression" referring to "\nabla^2 Omega" is ambiguously defined, since in the constrained case Omega is not differentiable. The reviewer is right; we have neglected to mention that the function is twice differentiable on the interior of its domain. This is now fixed. > Line 69: As far as I've seen in most convex analysis books and papers, this convention is a bit nonstandard; typically, "+infinity + (-infinity)" is undefined. Please comment on which results in this article break if the quantity "+infinity + (-infinity)" is undefined. I.e., if this is "not a valid move", which of your results still hold? It is very important to clarify which results hold under varying algebraic models of convex analysis. This question of adding conflicting infinities has been thoroughly studied by Moreau in: Inf-convolution, sous-additivité, convexité des fonctions numériques. J. Math. Pures Appl. (9), 49:109–154, 1970. In this paper, Moreau introduced two additions for $\mathbb{R} \cup \{-\infty,+\infty\}$. One of Moreau's two conventions for summing the infinities can be found under the name of inf-addition in Chapter 1, Section E, of R. T. Rockafellar and R. J-B. Wets. Variational Analysis. Springer-Verlag, Berlin, 1998. To make a long story short, one of them is used for minimization and the other for maximization. Using the inf-addition on $\mathbb{R} \cup \{-\infty,+\infty\}$ allows for things like $\min_C f$ coinciding with $\min_{\mathbb{R}^n} f(x) + \iota_C(x)$, where $C \subset \mathbb{R}^n$ and $f : \mathbb{R}^n \to \mathbb{R} \cup \{-\infty,+\infty\}$. As the generalized Bregman divergence $D_\Omega$ is to be thought of as a generalization of a distance, it is generally a quantity that will be minimized. So, the inf-addition fits the definition of the generalized Bregman divergence in line 69. 
In the proof of Proposition 7, the choice of inf-addition is corroborated by the fact that we want to calculate, for some fixed $y \in \mathbb{R}^k$, the conjugate $\big( \Omega(\cdot)+ D_\Omega(y,\cdot) \big)^*(\theta) = \sup_{y' \in \mathbb{R}^k} \langle y', \theta \rangle - \big( \Omega(y')+ D_\Omega(y,y') \big)$. The minus sign transforms the inf-addition into the sup-addition when distributed. If we had chosen the sup-addition in the definition of the Bregman divergence $D_\Omega$, we would here calculate a supremum of an inf-addition, thus entering unknown territory. To conclude, the natural choice of addition in the definition of the generalized Bregman divergence is $+\infty + (-\infty) = + \infty$. > Line 157 / Prop. 7: the relationship between the claim in the equation following Line 157 and Proposition 7 is unclear. How does the claim in line 157 follow from Proposition 7? We are not sure if we understand the question correctly. Line 157 is a definition, not a claim. It is what we call the target-dependent regularization $\Omega_y$. With the definition in line 157, Proposition 7 shows that $F[\partial \Omega](y, \theta)$ coincides with $\Omega_y(y) + \Omega_y^*(\theta)$, and therefore the Fitzpatrick loss generated by $\Omega$ coincides with the Fenchel-Young loss generated by $\Omega_y$. --- Rebuttal Comment 1.1: Comment: I thank the authors very much for their responses and clarifications. I appreciate the authors' motivation for using this model of defining $+\infty - (+\infty)$ as $+\infty$. I am aware of the Moreau article; however, other more modern convex analysis books (e.g., *Convex Analysis and Monotone Operator Theory in Hilbert Spaces* by Bauschke and Combettes) treat $+\infty - (+\infty)$ as an undefined term. This is an important distinction, since convex analysts certainly still use both notions in modern work. 
While I am still overall quite impressed with the article, I would like to see more emphasis on this distinction in the final version. Overall, it was a pleasure to review this article, and I look forward to seeing its final form in the future. I thank the authors for their time.
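For reference, the two conventions debated in this exchange can be written out side by side; a LaTeX sketch of Moreau's extended additions (notation illustrative, following the inf-addition of Rockafellar and Wets):

```latex
% Two ways to extend + to \overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\}:
%   inf-addition  (suited to minimization):  (+\infty) + (-\infty) := +\infty
%   sup-addition  (suited to maximization):  (+\infty) + (-\infty) := -\infty
% With the inf-addition, constrained and penalized minimization agree:
\min_{x \in C} f(x)
  \;=\; \min_{x \in \mathbb{R}^n} \big( f(x) + \iota_C(x) \big),
\qquad
\iota_C(x) =
  \begin{cases}
    0       & \text{if } x \in C,\\
    +\infty & \text{otherwise.}
  \end{cases}
```

This is only a compact restatement of the discussion above, not a result from the paper.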
Summary: This paper proposes a class of loss functions called the Fitzpatrick loss, based on the Fitzpatrick function known in maximal monotone operator theory. It can be shown that building a loss function with the proposed method can give a convex loss function that lower bounds the Fenchel-Young loss under the same choice of link function for prediction. Extensive analysis provided in the paper also shows the relationship between the Fitzpatrick loss and the Fenchel-Young loss. Finally, experiments were provided to validate the usefulness of the Fitzpatrick loss. Strengths: 1. The proposed method is theoretically sound and novel to the best of my knowledge. It gives rise to a way to systematically construct a convex loss function that lower bounds the family of Fenchel-Young losses under the same link function for prediction. 2. Not only is a new class of loss functions proposed, but a characterization of the relationship between the proposed class of loss functions and existing losses was also provided, to better understand research in this direction. 3. The paper is well-written overall and is friendly to newcomers to the field who are not familiar with Fenchel-Young loss functions. Weaknesses: Although it is nice to also provide experiments in the paper, I found that the experiments failed to motivate the usefulness of the Fitzpatrick loss family, in the sense that the performance is not preferable overall. I think not many insights were provided about when the Fitzpatrick loss is more useful than existing losses in the literature. The proposed loss is also more computationally demanding. Perhaps different kinds of experiments would add more value to the paper. From the current experiments, I think we still need a lot of work to study when to use the Fitzpatrick loss in practice. 
My current score is Weak accept (6), as I place quite high importance on the theory advancing the understanding of the Fenchel-Young and Fitzpatrick loss functions and believe these are the main contributions of this paper, rather than the experimental results. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Since the Fitzpatrick loss lower bounds the Fenchel-Young loss for the same link function, is it possible to find some use-cases that the proposed loss (e.g., Fitzpatrick-sparsemax) is preferable? For example, could it be more useful when we want to have sparse probability output and we have to learn under the case where there are outliers/noisy data? 2. It is said that we can provide a relationship between the Fitzpatrick loss and the Fenchel-Young loss by using a "target-dependent function" in Section 3.3 (Prop. 7). I think the result is very interesting and I have a few questions regarding this finding. - 2.1 Since we need a target-dependent function $\Omega_y$ to make the relationship hold, does the original Fenchel-Young loss support the target-dependent case? If so, then this should be no problem and we can state that all properties of Fenchel-Young losses are inherited by Fitzpatrick losses. - 2.2 Is it safe to say that the Fitzpatrick loss is a special case of the Fenchel-Young loss? Perhaps not? - 2.3 Can we also say that any Fenchel-Young loss can be rewritten in the form of a Fitzpatrick loss generated by a different $\Omega$? 3. In the conclusion section, there was a discussion about "there can only be one loss function associated with a certain link function". I am not sure about this statement for the definition of loss function here. Is it about a "proper composite" loss function, a "classification-calibrated" loss function, or a convex + (proper composite/classification-calibrated) loss function? 
Maybe we can be more explicit here; otherwise, one might think about any function or small ad-hoc modification using the same link and use it as a loss function without much problem. Minor comments 1. Typo: Line 99: also know as -> also known as 2. I feel it is a bit strange to say one loss is tighter than another loss since it is closer to zero everywhere. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The discussion was sufficient. The authors noted the challenge of computational cost and appropriately discussed the experimental results of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper and for the feedback. > Although it is nice to also provide experiments in the paper, I found that the experiments failed to motivate the usefulness of the Fitzpatrick loss family, in the sense that the performance is not preferable overall. We believe it is important to provide empirical results, even if they are partially positive and partially negative. What is important is that our conclusions are backed up by observations. Based on our experiments on 11 datasets, our conclusions are: - FY-logistic performs better than or comparably to FP-logistic and is computationally cheaper. Therefore, FY-logistic remains the solution of choice when we want to use the soft-argmax link. - FP-sparsemax performs better than or comparably to FY-sparsemax and is computationally equivalent. Therefore, FP-sparsemax is a viable alternative to FY-sparsemax when we want to use the sparse-argmax link. > The proposed loss is also more computationally demanding. It is true that it is more computationally demanding to use the FP-logistic loss than its FY counterpart, as the former requires solving a root-finding problem (e.g., by bisection), while the latter enjoys a closed form (using the log-sum-exp). However, for FP-sparsemax, the situation is different. Indeed, computing the loss (and its gradient) is as computationally demanding as for its FY counterpart (Proposition 5). Both only involve a projection onto the simplex. > Since the Fitzpatrick loss lower bounds the Fenchel-Young loss for the same link function, is it possible to find some use-cases that the proposed loss (e.g., Fitzpatrick-sparsemax) is preferable? For example, could it be more useful when we want to have sparse probability output and we have to learn under the case where there are outliers/noisy data? Based on our experiments on 11 datasets, we found that FP-sparsemax is slightly better than FY-sparsemax. 
We also tried artificially adding various degrees of label noise, but we did not observe any major change in our conclusions. > 2.1 Since we need a target-dependent function $\Omega_y$ to make the relationship hold, does the original Fenchel-Young loss support the target-dependent case? If so, then this should be no problem and we can state that all properties of Fenchel-Young losses are inherited by Fitzpatrick losses. In principle, Fenchel-Young losses do not support target-dependent regularization functions $\Omega_y$. The problem comes at prediction time, as we cannot compute $\nabla \Omega_y^*$, since $y$ is unknown. However, viewing Fitzpatrick losses as Fenchel-Young losses with target-dependent $\Omega_y$ is useful because, as you point out and as we write in line 162, properties can be inherited easily from Fenchel-Young losses. For example, if $\Omega_y$ is strongly convex, then the Fitzpatrick loss associated with $\Omega$ is smooth. > 2.2 Is it safe to say that the Fitzpatrick loss is a special case of the Fenchel-Young loss? Perhaps not? The answer is no, by the theory of representations of maximal monotone operators (Burachik et al., 2002). However, in the paper we introduce the notion of target-dependent Fenchel-Young losses (which are not strictly speaking Fenchel-Young losses). Indeed, one of the contributions of our paper is to rewrite Fitzpatrick losses as Fenchel-Young losses with a target-dependent $\Omega_y$ (Proposition 7), while keeping the same target-independent link function $\nabla \Omega^*$. > 2.3 Can we also say that any Fenchel-Young loss can be rewritten in the form of a Fitzpatrick loss generated by a different $\Omega$? As both Fenchel-Young and Fitzpatrick losses only depend on the subdifferential of the generating function, the question boils down to asking whether a Fenchel-Young loss generated by some subdifferential can be rewritten in the form of a Fitzpatrick loss for a different subdifferential. The answer is no, by Theorem 6.1 in (Burachik et al., 2002). 
> In the conclusion section, there was a discussion about "there can only be one loss function associated with a certain link function". I am not sure about this statement for the definition of loss function here. Is it about a "proper composite" loss function, a "classification-calibrated" loss function, or a convex + (proper composite/classification-calibrated) loss function? Maybe we can be more explicit here, otherwise one might think about any function or small ad-hoc modification using the same link and use it as a loss function without much problem. Thank you for catching this. We indeed meant a convex loss function. Otherwise, it is indeed possible to compose the link with some other loss function, but the resulting composite loss function would be nonconvex in general. We also omit trivial loss modifications, such as multiplying the loss by a non-negative scalar. We fixed the typos reported by the reviewer. --- Rebuttal Comment 1.1: Title: Thank you for the author feedback Comment: I have read the other reviews as well as the author rebuttal. Thank you very much for the detailed feedback. Thank you for clarifying my concerns and my potential misunderstanding. I agree with the other reviewers and the paper that this paper provides a theoretical foundation for the Fitzpatrick loss, which is novel to the best of my knowledge. Even though the experimental results are not very strong, and thus more investigation of empirical performance is needed, it can be studied in the future for some specific applications of interest. In its current form, I think the paper is worth acceptance. As a result, I raised my score to 7 (Accept).
Summary: The paper describes an alternative approach to constructing loss functions. The authors propose a Fitzpatrick loss and compare it to the Fenchel-Young loss and Bregman divergence. The work enumerates the basic properties of the introduced loss and compares it to existing loss functions via numerical simulations. Strengths: The overview of the Fitzpatrick loss is nice. For a reader unfamiliar with the topic, it is easy to get the main concepts. The authors also give examples and compare the Fitzpatrick loss with the Fenchel-Young loss. Weaknesses: I think the paper needs a more thorough numerical evaluation given the format of the conference. Although the properties of the Fitzpatrick loss are given, the logical question is: why use it instead of known loss functions for machine learning? At the moment, the authors present a comparison for a classification problem over several datasets. Without a broader experimental study, the manuscript reads more like a mathematical note. Therefore, I think additional experiments would strengthen the paper. Moreover, the datasets are for classification, but the results are compared in terms of mean squared error. It seems more logical to compare using some metric for classification, i.e. accuracy, F1-score, etc. Technical Quality: 3 Clarity: 3 Questions for Authors: The paper claims that the Fitzpatrick losses are tighter than Fenchel-Young losses. Can something be said from the statistical point of view? Does using the new loss lead to better generalization, for example? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper does not have negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper and for the feedback. > I think the paper needs a more thorough numerical evaluation given the format of the conference. Our paper is first and foremost a theoretical contribution: - it introduces a new family of losses that lower-bound Fenchel-Young losses; - it is the first practical application of the Fitzpatrick function; - it advances the Fitzpatrick function literature, e.g., Proposition 7 is a novel result. That said, based on our empirical comparison on **11 datasets**, our main claim is that FP-sparsemax is a viable alternative to FY-sparsemax, as it performs better or comparably, and is not more computationally demanding. > Moreover, the datasets are for classification, but the results are compared in terms of mean squared error. It seems more logical to compare using some metric for classification, i.e. accuracy, F1-score, etc. Following Blondel et al. (2020), we are doing label proportion estimation experiments. Since the model outputs are probability distributions over the classes, the mean squared error is an appropriate measure of prediction quality. It is known as the **Brier score** in a probabilistic forecasting context. > Can something be said from the statistical point of view? Does using the new loss lead to better generalization? We leave additional theoretical guarantees such as calibration or generalization to future work, as we believe the paper is already quite packed with results (8 propositions in the main text). --- Rebuttal Comment 1.1: Title: Answer to Rebuttal Comment: Dear Authors, Thank you for addressing my comments. I decided to raise my score by 1 point.
Rebuttal 1: Rebuttal: We thank the reviewers for the very positive and constructive feedback, as well as the ACs for their editorial work. We summarize below our main replies to the reviewers (see the reply to each reviewer for more details). * All comments have been addressed (minor typos fixed, and some assumptions that were in the text have been moved explicitly into the propositions). * Our paper is first and foremost a theoretical contribution (**8 propositions** in the main text), advancing both the literature on loss functions and the literature on Fitzpatrick functions. It is the first practical application of the Fitzpatrick function. * Following Blondel et al. (2020), we are doing label proportion estimation experiments and we use MSE (also known as the **Brier score**) for evaluation. * Based on our experiments on **11 datasets**, our conclusions are: - FY-logistic performs better than or comparably to FP-logistic and is computationally cheaper. Therefore, FY-logistic remains the solution of choice when we want to use the soft-argmax link (a.k.a. softmax). - FP-sparsemax performs better than or comparably to FY-sparsemax and is computationally equivalent. Therefore, FP-sparsemax is a viable alternative to FY-sparsemax when we want to use the sparse-argmax link (a.k.a. sparsemax).
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper proposes a family of losses based on the notion of a Fitzpatrick function, which takes the role of composable subdifferentials in Fenchel-Young losses. The resulting family of losses is parallel to Fenchel-Young losses in the sense that each Fenchel-Young loss has a shared link function with a certain Fitzpatrick loss, but the losses themselves differ. It is shown that these losses possess some desirable properties such as convexity, and that they are based on a type of duality that yields a smaller duality gap than the one based on the Fenchel-Young inequality. Representative Fitzpatrick losses, such as the logistic and sparsemax losses, are developed as examples of useful members of the family. Experiments show that the losses are mostly comparable in performance to their Fenchel-Young parallels, with the exception of the sparsemax loss, which might be slightly improved in its Fitzpatrick version. Strengths: The approach is original and imports ideas from operator theory into ML. The theory is laid out neatly and in an easy-to-understand manner. Loss functions are clearly an important part of training any ML model, hence improvements in them can be very significant even if they are small, and I assume that further progress beyond this paper can be significant in practice as well. Weaknesses: The empirical utility of the developed losses is not entirely clear, since they do not offer a significant advantage over standard losses, and in some cases (e.g. the Fitzpatrick sigmoid) may be more computationally demanding. In other losses such as the squared loss and hinge loss, the parallel Fitzpatrick losses are practically the same. Hence, the contribution here is mainly in exploring further definitions of losses that are associated with familiar link functions, but attach to them a loss function that is not the standard one that stems from the Fenchel-Young family. 
I am not an expert on this topic, so while I'm unsure whether the paper is interesting for its sub-community within ML, or for the wider NeurIPS audience (mainly because readers put a lot of emphasis on empirical results), I found it interesting enough to read, hence I will give a borderline acceptance rating. Technical Quality: 4 Clarity: 3 Questions for Authors: No significant questions arise from reading the paper. It might be nice if the authors could comment on what potential they see for future research on this topic, and how Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: The authors mention some limitations such as somewhat increased computation time, and acknowledge the mixed empirical results. I find this to be an adequate treatment of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper and for the feedback. > The empirical utility of the developed losses is not entirely clear, since they do not offer a significant advantage over standard losses, and in some cases (e.g. Fitzpatrick sigmoid) may be more computationally demanding. Indeed, as we clearly acknowledged in the experiment section, FP-logistic does not bring much benefit compared to FY-logistic. However, FP-sparsemax brings benefits compared to FY-sparsemax: FP-sparsemax performs better than FY-sparsemax for some datasets and comparably for the others. Therefore, FP-sparsemax should be considered as a viable alternative to FY-sparsemax, as it is not more computationally demanding. Indeed, as explained in Proposition 5 and in the text below it, FP-sparsemax is based on the projection onto the simplex, just like FY-sparsemax. Furthermore, the FP-sparsemax loss can also be trained by dual coordinate ascent, similarly to the FY-sparsemax loss (see Section 8.2, Blondel et al. 2020). Indeed, for $\Omega(y) = \frac{1}{2}\lVert y \rVert^2 + \iota_{\triangle^k}(y)$, dual coordinate ascent consists of computing these proximal operators for each subproblem associated with sample $i$: - for FY-sparsemax: $\mathrm{prox}_{\frac{1}{\sigma_i} \Omega}(v_i)$; - for FP-sparsemax: $\mathrm{prox}_{\frac{2}{\sigma_i} \Omega}(\frac{v_i - y_i}{\sigma_i})$. Both $\mathrm{prox}$ have a similar computational cost, as they are both computed using formulas for $\mathrm{prox}_{\tau \Omega}(\eta)$, where $\tau \in \mathbb{R}$ and $\eta \in \mathbb{R}^k$ (see Section 8.4, Blondel et al. 2020). Therefore, FP-sparsemax and FY-sparsemax are equivalent in terms of computational cost, whether we use primal or dual training. 
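As an aside, both proximal operators above reduce to a Euclidean projection onto the probability simplex, which is why their costs match. A minimal NumPy sketch of the standard sort-based projection (the function name `project_simplex` is ours, not from the paper):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex.

    Standard O(k log k) sort-based algorithm; this is the routine
    underlying both sparsemax prox computations discussed above."""
    u = np.sort(v)[::-1]                       # sort in descending order
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * k > (css - 1.0))[0][-1]  # last index satisfying the support condition
    theta = (css[rho] - 1.0) / (rho + 1)          # shared shift
    return np.maximum(v - theta, 0.0)
```

Both prox calls then amount to applying this routine to a rescaled input, so primal or dual training with either loss has the same per-step cost.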
> It might be nice if the authors could comment on what potential they see for future research on this topic Future directions include calibration guarantees, new link functions and more efficient training for the FP-logistic loss. --- Rebuttal Comment 1.1: Title: Post Rebuttal Update Comment: Thank you for the response. I understand that the point of the paper is not to introduce "better" losses than FY, and appreciate the paper for the results it presents. Therefore I will keep my recommendation to accept the paper.
null
null
null
null
null
null
Analytically deriving Partial Information Decomposition for affine systems of stable and convolution-closed distributions
Accept (poster)
Summary: This paper studies analytically computing the Partial Information Decomposition (PID) when the underlying distribution is a stable distribution or a convolution-closed distribution. In particular, the paper considers the BROJA-PID framework, consisting of a minimization problem of conditional mutual information over a set of distributions. Under this framework, computing PID requires solving the minimization problem. The minimization problem gives a solution of zero whenever the underlying random variables form a Markov chain. The main contributions of this paper are: 1) generalizing the Gaussian PID result to broader distributions by finding a sufficient condition under which the mutual information in the minimization problem achieves zero; 2) connecting data fission and thinning results by showing some convolution-closed distributions can be useful in constructing a solution to the minimization problem. The authors also provide a list of system examples for which their results are applicable. Strengths: This paper generalizes the result of PID computing for the joint Gaussian distribution to a broader distribution family, some stable distributions and convolution-closed distributions. The results in the paper provide a list of system models where the unique information is zero by exploring the properties of Markov chains and by constructing a distribution having a Markov chain structure. Moreover, the paper provides a connection to data fission/thinning and a way to construct an upper bound using their results, which might be useful in computing PID for some systems. Weaknesses: While this paper extensively explores a sufficient condition (Markov chain) for computing the unique information, I am not sure how much this result contributes to computing PID for general systems. Some of my concerns are: 1) Most of the results come from the construction of distributions having the Markov chain property. 
In my opinion, the results are just examples of underlying system models that induce a Markov chain. 2) The assumption that the underlying conditional distributions should somehow linearly depend on the target random variable. This assumption seems quite strong in general system models. 3) Although the results are claimed as computing PID, the results are not applicable for general nonzero unique information. I think the zero unique information case is very restrictive in general system models, so the results are applicable in very few cases. 4) Analysis of the proposed upper bound is not sufficient. Technical Quality: 3 Clarity: 2 Questions for Authors: I am curious how tight the proposed upper bound is for system models other than the Poisson distribution. Can the authors provide more simulation results under various settings? Particularly, does the upper bound work well for systems that do not have the Markov chain property (zero unique information) or satisfy the assumptions in the paper? Moreover, can the authors provide comparisons with other numerical estimators for PID under various settings? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. While we agree that the assumption of linearity can be strong, this assumption has been made in previous works, e.g., Venkatesh et al. NeurIPS’23 uses a Gaussian assumption with linear dependence on M to calculate the PID of experimental neural data. Based on the reviewer’s suggestion, we have analyzed our proposed upper-bound for two more systems, namely those defined by negative-binomial and binomial distributions (see Fig 1a in the attached document). The setup is the same as the Poisson experiment (Section 6 of the manuscript). The system models used for negative-binomial and binomial are listed in Appx. B of the manuscript. We analyzed 10 different function pairs and considered M to have either 2 or 4 outcomes. We find that the upper-bound continues to be reasonably tight. We used the Poisson experiment in the manuscript to analyze systems that have non-zero UI in both X and Y. To do so, we sub-selected the Poisson systems which have both non-zero UI (shown in Figs 1c and 1d in the attached document), and showed the respective errors for these non-zero UI systems in Fig 1e. We find that the upper-bound continues to hold reasonably well for these systems having both UI terms as non-zero. It is not possible to apply the upper-bound when the assumptions in the paper are not satisfied. Particularly, for the Poisson experiment, the distribution shown in equation (316), given by Binomial$(Z, (f_2(M)-\delta^Y_{bias})/(f_1(M)-\delta^X_{bias}))$, which is used in constructing the upper-bound, would not be well-defined as it requires $(f_2(M)-\delta^Y_{bias})/(f_1(M)-\delta^X_{bias})$ to be less than 1. There is a naive extension to the upper-bound in case the assumptions are not satisfied, which is to construct the upper-bound distribution individually for each outcome of M. For example, assume M takes two values 0 and 1. 
Let us further assume that $(f_2(M=0)-\delta^Y_{bias})/(f_1(M=0)-\delta^X_{bias}) <1$ and $(f_2(M=1)-\delta^Y_{bias})/(f_1(M=1)-\delta^X_{bias}) >1$. Then, for M=0, we will construct the upper-bound distribution as detailed in the manuscript, and for M=1, we can swap all the variables pertaining to X with all variables pertaining to Y, which would change the distribution in equation (316) to Binomial$(Z, (f_1(M=1)-\delta^X_{bias})/(f_2(M=1)-\delta^Y_{bias}))$ for M=1 case, ensuring that this binomial distribution continues to be well-defined. Note that we have no theoretical reason to assume that this upper-bound should be tight. We show a similar empirical analysis for the Poisson experiment for 5 function pairs which do not follow the assumptions of the upper-bound discussed in this work (see Fig 1b. in the attached document). We find that this upper-bound is tight for some functions and loose for others. Given the time-constraints of author-rebuttal, we are unable to provide comparisons with other numerical estimators. Regardless, all our experiments are performed on discrete distributions for which the optimization problem in (2) is convex, and any numerical estimator used would converge to the global minimum. We choose Makkeh et al. Entropy'18 as the numerical solver as it implements an exponential cone solver to numerically solve equation (2) and empirically tended to be the fastest method for our experiments. --- Rebuttal Comment 1.1: Comment: Thank the authors for the response. As the author answers some of my questions, I will raise score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for raising your score and providing a thoughtful review.
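As background to the Binomial$(Z, \cdot)$ construction discussed in this rebuttal: it rests on the standard binomial-thinning identity, i.e., if $Z\sim$ Poisson$(\lambda)$ and $X|Z\sim$ Binomial$(Z,p)$, then marginally $X\sim$ Poisson$(p\lambda)$, which is only well-defined for a success probability $p \leq 1$. A minimal Monte Carlo check of this identity (an illustrative sketch, not the authors' code; the parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p, n = 5.0, 0.4, 200_000  # Poisson rate, thinning probability, sample size

# Binomial thinning: draw Z ~ Poisson(lam), then X | Z ~ Binomial(Z, p).
z = rng.poisson(lam, size=n)
x = rng.binomial(z, p)

# The thinning identity says X is marginally Poisson(p * lam), so its
# sample mean and variance should both be close to p * lam = 2.0.
print(round(x.mean(), 2), round(x.var(), 2))
```

When the ratio playing the role of $p$ exceeds 1, as in the regime discussed above, no such thinned construction exists, which is why the upper-bound distribution is no longer well-defined.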
Summary: The paper focuses on analytical expressions for the BROJA-PID [12, 33]. By considering the stable distribution family, the authors are able to generalize the previous Gaussian PID result [17] (one of the unique information terms always vanishes for BROJA-PID) to a wider class of distributions. An upper bound for convolution-closed distributions is also given. Strengths: The BROJA-PID [12, 33] has been widely adopted due to the simplicity of its optimization form. This paper extends a famous Gaussian PID result [17] (one of the unique information terms always vanishes for BROJA-PID) to a wider class of distributions. This result simplifies the computation of many PID problems and provides a test bed for numerical PID estimators. Weaknesses: The proposed analytical extension is only for the BROJA-PID [12, 33]. There exist other PID measures that diverge from the BROJA-PID even in the bivariate case. The paper does not provide sufficient discussion on this. Further, the title of the paper is misleading, as Partial Information Decomposition is not just the BROJA decomposition. The authors should consider revising the title and make this clear in the beginning of the paper. Also, if an analytical expression is already available, why "computing"? Technical Quality: 4 Clarity: 3 Questions for Authors: 1. In line 131, it is stated that the main focus of this work is to analyze the cases where (2) is analytically solvable. What is the definition of "analytically solvable"? 2. In section 4, the two systems are constructed in a symmetric manner. How do the results compare to [Niu and Quinn, 2020]? 3. Do the analytical solutions only hold for BROJA-PID or do they also hold for a wider range of PID measures? [Niu and Quinn, 2020] Niu, Xueyan, and Christopher J. Quinn. "Synergy and redundancy duality between gaussian multiple access and broadcast channels." 2020 International Symposium on Information Theory and Its Applications (ISITA). IEEE, 2020. 
Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive endorsement of our work. We aim to improve our work based on the suggestions made by the reviewer. The answers to the specific queries of the reviewer are provided below: **Answer to Q1**: The definition we implicitly adopted for “analytically solving” equation (2) is that we are able to provide an explicit construction of the minimizing distribution $Q^*(M,X,Y)$ that minimizes equation (2). In all our proofs, we have provided an explicit construction of the minimizing distribution. We aim to appropriately modify the manuscript so as to clearly and explicitly define this concept. We apologize for any inconvenience/confusion it may have caused the reviewer. **Answer to Q2**: We compared the two systems proposed in Section 4 (defined on lines 194 and 202) with the results presented in Niu & Quinn ISITA'20. We found that there are subtle but significant differences between the systems we are analyzing and the systems analyzed in Niu & Quinn ISITA'20: 1. The main difference is that Niu & Quinn ISITA'20 compares the PID within a Gaussian Broadcast Channel (BC) with the PID within a Gaussian Multiple Access Channel (MAC) and obtains a duality between the PID terms of these *two* different systems, i.e., Synergistic Information (SI) of MAC has the same value as the Redundant Information (RI) of BC and vice-versa. In contrast, we are analyzing the PID of a *single* system consisting of a message (M) passed through two different BCs, i.e., $P(\mathbf{X}|M)$ and $P(\mathbf{Y}|M)$, where $\mathbf{X}$ and $\mathbf{Y}$ are vectors. The theoretical results for systems 1 and 2 in Section 4 can be thought of as the generalization of Property 1 in Niu & Quinn ISITA’20 for stable distributions, which can be used to compute the UI, RI, and SI w.r.t. $M$ between these two different BCs. 2. One can potentially construct an equivalent MAC model for stable distributions inspired by the BC-type models discussed in Section 4. 
However, it is not immediately apparent to us how to show the duality discussed in theorems 4.1 and 4.2 in Niu & Quinn’20 for these stable distribution versions of MAC and BC. Particularly, the proof of theorem 4.1 relies on the closed-form expressions of $I(X_1; Y |X_2)$ and $I(X;Y)$, which exist for Gaussian distributions but are not readily available for all stable distributions. Verdu Entropy’23 does provide a similar log(1+SNR)-type expression for the Cauchy distribution, which could potentially be used to show similar duality results for the same, but we are not sure if the duality results would continue to hold for all stable distributions. Even among the stable distributions, the Gaussian distribution does tend to be quite special as the only continuous stable distribution with finite moments. **Answer to Q3**: Yes, our analytical solutions (theorems 1-7) hold for a wider range of PID measures. Particularly, our results hold for two broad classes of PID definitions: PID definitions satisfying assumption ($\*$) in Bertschinger et al. Entropy’14 and Blackwellian PIDs (see Appendix B-C of Venkatesh & Schamberg ISIT’21 for a deeper discussion on differences between these two families of PID definitions). Some examples of PID definitions satisfying assumption ($\*$) in Bertschinger et al. Entropy’14 are the Williams & Beer’10 PID definition, the PID definition proposed in Harder et al. Phys. Rev. E.’13, and I-PID proposed in Venkatesh et al. ISIT’23. Similarly, an example of a Blackwellian PID is the $\delta$-PID discussed in Venkatesh & Schamberg ISIT’21. We also point the reviewer to Section III-F of Venkatesh et al. ISIT’23 which discusses a family of Blackwellian PID definitions for which our theorems are also applicable. We invoke the argument presented in the proof discussed in Section 4.2 of Barrett Phys. Rev. E.’14 to show that our results hold for any PID definition satisfying assumption ($\*$) in Bertschinger et al. Entropy’14. 
Barrett’s argument relies on the key observation that the UI calculated using BROJA-PID upper bounds the UI calculated using any other PID definition satisfying assumption ($\*$) in Bertschinger et al. Entropy’14. Consequently, if one of the UI atoms of the BROJA-PID is zero then the corresponding UI atom of the other PID definitions must also be zero due to BROJA-PID upper-bounding them and the non-negativity of the PID atoms. Since the remaining PID atoms are calculated using equation (1) in our manuscript, it must be the case that the PID atoms calculated using any PID definition satisfying assumption ($\*$) must be the same as the PID atoms calculated using BROJA-PID whenever one of the UI terms of BROJA-PID is zero. By utilizing the definition of a Blackwellian PID, we can demonstrate that our results also hold for any Blackwellian PID. For a Blackwellian PID, a UI atom is zero iff it is possible to construct the Markovian structure between the random variables (i.e., $M\rightarrow X\rightarrow Y$ or $M\rightarrow Y\rightarrow X$) while preserving their pairwise marginals (see Venkatesh et al. ISIT’23 for more details), which is indeed the case for our results. Consequently, theorems 1-7 can be used to show that all Blackwellian PIDs would have one of the UI atoms as zero. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. Given the modifications the authors promise, I have raised my rating to accept. I strongly encourage the authors to revise the title and incorporate the additional discussions, including your answers to Q2 and Q3, into the final version. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for raising their rating. We will incorporate the changes suggested by the reviewer.
Summary: The paper provides an analytical way of evaluating Partial Information Decomposition (PID) of the BROJA type (BROJA-PID). PID is a framework used in neuroscience and other fields to analyze how two sources of information interact to affect a target. The primary challenge addressed by the paper is the difficulty in computing BROJA-PID, which involves a complex optimization problem over probability distributions. This difficulty is avoidable when the sources and the target obey the Gaussian distribution. By considering the reason why this avoidance is possible, the authors clarify a sufficient condition (certain affine structures are assumed when conditioned on the target) where the analytical evaluation of PID is possible for a larger class of stable distributions. This result is generalized for convolution-closed distributions and a subclass of the exponential family, to show a link with data thinning and data fission. Furthermore, an upper bound of the PID for convolution-closed distributions, which is analytically evaluated and is applicable in generic situations, is derived based on the invented technique. Numerical simulation is conducted to demonstrate the efficiency of the proposed formula. Strengths: 1. Innovative Theoretical Contributions: The paper extends the known analytical PID results from Gaussian distributions to a broader class of stable distributions. This is a significant theoretical advancement as it provides new insights into the information structure of various well-known distributions. An analytical upper bound for PID for convolution-closed distributions is also useful for practical applications. 2. Practical Relevance: The results are particularly relevant to neuroscience and related fields, where PID is used to understand complex interactions in the systems. Weaknesses: 1. 
Accessibility: The paper is highly theoretical, which might limit its accessibility to a broader audience who may not have a strong background in statistics and information theory. 2. Limited Scope of Applications: While the paper focuses on extending analytical results to stable and convolution-closed distributions, it does not address the applicability of these results to real-world data extensively. The results rely on specific assumptions about the underlying distributions. The practical applicability of these results depends on how well these assumptions hold in real-world scenarios, and more discussion about this point is desired. Also, some more empirical validation on real-world applications would enhance the impact of the present work. Technical Quality: 4 Clarity: 4 Questions for Authors: My questions are mainly about the practical implementation/application of the presented result: 1. How do you recommend handling situations where the underlying distributions are not exactly known and only empirical data is available? 2. Could you clearly state the situations where your analytical formulas have a clear advantage over numerical methods? 3. How do the theoretical results apply to high-dimensional data sets, and could you provide an example of such an application? Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: I think the authors addressed the limitations of the work adequately. For improvement, it is fine to answer and resolve the above questions and weak points. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review of our work. We have provided the replies to the queries raised by the reviewer below: **Reply to Q1**: Our theoretical results can be applied to empirical data by assuming that the data can be reasonably modeled by one of the distributions discussed in this work (e.g., Poisson, uniform, etc.). The validity of this assumption could be confirmed through, e.g., a goodness of fit test such as Kolmogorov–Smirnov test. After confirming the assumptions on the distribution of the data hold, the corresponding distribution parameters can be estimated, e.g., through maximum likelihood estimation. If the estimated parameters reasonably follow the assumptions of their corresponding system model, then we can use the corresponding theoretical result to infer that one of the UI terms in the PID is zero. Consequently, the problem of PID estimation can be reduced to estimating I(M;X,Y), I(M;X), and I(M;Y), which can be estimated by using plugin estimators or general-purpose mutual information estimators such as MINE (Belghazi et al. ICML’18) or Kernel Density Estimators (KDE) (Moon et al. Phys. Rev. E.’95). In case a convolution-closed distribution can reasonably model the data, the upper bound presented in Section 6 can also be used to approximately calculate PID atoms if the estimated parameters do not follow the assumptions of the systems discussed in theorem 1-7 but do follow the looser assumptions required by the upper-bound. **Answer to Q2 and Q3**: We combined the replies to questions 2 and 3 raised by the reviewer as the replies to both questions are related. First, theorems 2, 3, and 5 discuss the vector-case analysis, where X and Y are assumed to be of arbitrary dimensions, although M is still assumed to be univariate. Theorem 5, which discusses the multivariate Poisson system, is particularly relevant as the Poisson distribution is a common choice for modeling neural spikes. 
Many neuroscientific experiments record spiking activity from multiple neurons across different brain regions under different behavior conditions, resulting in high-dimensional datasets. The behavior condition can often be modeled as a univariate discrete random variable encoding different behavior conditions (represented as $M$ in our setup). The neuron spiking activity in different brain regions can be modeled using multiple Poisson random variables. E.g., the vectors X and Y can represent the activity of neurons in brain regions 1 and 2, respectively. For such systems, theorem 5 could be used to calculate the PID of neurons present in different brain regions with respect to the behavior variable $M$. *Advantage*: The main advantage of using theorem 5 and modeling neural data using the Poisson distribution is data-efficiency. As these datasets tend to be high-dimensional, general-purpose PID estimators must first estimate the high-dimensional distributions P(X|M) and P(Y|M) and then optimize over these estimated high-dimensional distributions to obtain the PID. This procedure can be data inefficient at high dimensions. In contrast, computation of PID atoms using theorem 5 under the Poisson system's assumptions only requires estimating the rate parameters of the different Poisson variables, which is more data-efficient at high dimensions. Note that the corresponding mutual information terms required for calculating PID can be estimated using plug-in estimators as they only depend on P(M) (already known by design of the experiment) and the estimated Poisson rate parameters. Note that to use our method, the data must follow the system model assumptions reasonably well. Our theoretical results could also be useful in calculating PID of fat-tailed distribution systems. 
Most of the existing PID solvers assume discrete or Gaussian distribution for calculating PID, both of which would be poor assumptions for modeling fat-tailed distributions frequently used to model financial data (Jansen et al. the review of economics and statistics ’91, Haas et al. ‘07). Pakman et al. NeurIPS'21 uses a copula method for estimating PID by optimizing an upper bound, which could potentially be used to estimate the PID of fat-tail systems. However, if the system model assumptions of our theorems hold, then our results can provide an exact PID of the fat-tailed systems instead of Pakman et al.'s approximate solution. We also want to point out a *practical benefit* of our theoretical results, which is providing numerous systems on which the performance of different numerical estimators can be benchmarked. Previously, the ground-truth PID was known for very few systems (primarily Gaussian or some hand-designed toy systems), limiting the evaluation of numerical PID estimators as these estimators could not compare their numerical estimates with the ground-truth. With our theoretical results, the numerical PID estimators can extensively test their performance on a large number of systems where the ground-truth PID is known or can be easily calculated. While not directly discussed in this work, another interesting avenue to explore using our theoretical results is to numerically estimate PID for more general system models by extending the method used in Venkatesh et al. NeurIPS’23. Venkatesh et al. NeurIPS’23 builds on existing analytical Gaussian PID results by assuming that Q(Y|M,X) of the minimizing distribution Q(M,X,Y) remains Gaussian for general Gaussian systems. This assumption can be seen as a special case of assuming Q(Y|M,X) of the minimizing distribution Q(M,X,Y) having the form $G(\epsilon\delta, (1-\epsilon)\delta, x)$ (see Sec. 5 of the manuscript) for convolution-closed distribution. 
Hence, there is potential for extending the numerical techniques developed in Venkatesh et al. NeurIPS’23 to other convolution-closed distributions. --- Rebuttal Comment 1.1: Title: Thanks for the detailed response Comment: Thank you for the detailed response. I think those points will be useful for readers and should be included somewhere in the manuscript. I am anyway happy to support this paper for publication. --- Reply to Comment 1.1.1: Comment: Thank you for your support and insightful comments.
Summary: This paper provides analytical solutions for the Partial Information Decomposition (PID) (specifically, the BROJA PID) for a broad class of continuous distributions. The core proposition of this work lies in the relationship between stable distributions, Markov chains, and the minimisation conditions for Blackwellian PIDs (such as BROJA). In addition to a wealth of novel analytical results, the paper also provides an original and useful link to data fission and data thinning, as well as upper bounds for PID in a broader class of distributions. Strengths: This is a very strong paper with substantial theoretical results. It fills an important gap in the PID literature, which has so far been (almost, although not fully) confined to either Gaussian or discrete systems. The paper also links PID to other problems in statistics (data fission and data thinning), which grounds the results in a broader context. Although presentation could be improved, the authors do a reasonably good job at presenting a very mathematically involved subject while maintaining some degree of intuition. Weaknesses: Despite its title, the paper doesn't _really_ compute any PID analytically. To compute all PID atoms, one needs to calculate the 3 mutual informations in addition to the BROJA unique information. The authors do not show that these quantities can be analytically calculated for the same classes of distributions studied. Similarly, the authors only provide sufficient conditions for the BROJA-PID to be zero, but they do not compute it analytically for cases when it is non-zero. This is a very important caveat and, although I still think the paper makes a solid contribution, many of its statements and section titles are somewhat overstated. I'm aware of the space limitations, but I would still consider it a weakness that the paper doesn't provide much of an illustration of its results. 
This leaves readers with the impression that many results are within reach, but without any clear intuition of what these results actually are. I would encourage the authors to capitalise more on the benefits they claim, e.g. showing actual analytical PID values from fat-tailed systems. Technical Quality: 4 Clarity: 3 Questions for Authors: Why is the restriction of univariate M necessary? Theorem 1, one of the core results of the paper, has an important "if" clause in line 187. What is the intuition behind this condition? When does it hold? Please discuss and provide positive and negative examples. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors adequately discuss their limitations at length. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer’s positive review of our work and their constructive criticism which should improve the overall quality of our manuscript. We like the reviewer’s suggestion to show analytical PID values/expressions to illustrate our theoretical results. Inspired by this suggestion, we found a pleasing illustration of theorem 1 that utilizes the Cauchy distribution. Particularly, if $M\sim\text{Cauchy}(0,\gamma_M), P(X|M)\sim\text{Cauchy}(M, \gamma_X)$, and $P(Y|M)\sim\text{Cauchy}(M, \gamma_Y)$, then the RI term can be written as $\min\\{ \log(1+\gamma_M/\gamma_X), \log(1+\gamma_M/\gamma_Y)\\}$, which is the analog of the Gaussian PID result of Barrett Phys. Rev. E.’14. Similarly, the non-zero UI term can be written as $\max\\{ \log(1+\gamma_M/\gamma_X), \log(1+\gamma_M/\gamma_Y)\\}-\min\\{ \log(1+\gamma_M/\gamma_X), \log(1+\gamma_M/\gamma_Y)\\}$. Obtaining the analytical expression of SI turns out to be harder due to I(M;X,Y) not having a “simple” closed-form expression, as P(Y|X) and P(X|Y) are not necessarily Cauchy distributions (in contrast with the Gaussian case where P(X|Y) and P(Y|X) continue to be Gaussian). Regardless, we will try to find informative illustrations that facilitate understanding of our theoretical results, e.g., by providing examples of simple closed-form PID expressions for the systems listed in Appendix B and C, wherever possible. **Answer to Q1**: It is necessary to restrict M to be univariate as many of the constructions provided in the proofs of theorems 1-7 do not extend to the case where M is multivariate. For example, lemma 10 used in the proof of theorem 2 only works for the case where M is univariate. Similarly, the construction of Q(X’|X) in the proof of theorem 4 only works when M is univariate, which we illustrate through a counter-example: Consider the system with vector message $M=[M_1,M_2], P(X|M) =$ Poisson$(aM_1+bM_2)$, and $P(Y|M)=$Poisson$(cM_1+dM_2)$ with a>c and b<d. 
Then, the binomial thinning operation (used in the proof of theorem 4 to construct the Markov chain) can no longer be used to construct the Markov chain $M\rightarrow X\rightarrow Y$ as there exists no $0<p\leq 1$ such that $Y = p \circ X$, since no $p$ can simultaneously satisfy $ap=c$ and $bp=d$. A similar situation occurs for convolution-closed distributions where we also rely on a dilation operator akin to binomial thinning. Intuitively, univariate M describes a special condition where one of the sources (i.e., X or Y) is always a noisier version of the other. Consequently, constructing Markov chains is comparatively easier for univariate $M$ as one can always construct the noisier source from the less noisy source by adding independent noise, dilating the signal strength, or both. For multivariate $M$, it is not always possible to consider one source as a noisier version of the other, which makes constructing the respective Markov chain (required for showing one of the UI terms is zero) more challenging or even potentially impossible. For example, in the Poisson counter-example system discussed above, we have a>c and b<d, in which case X has access to a less noisy version of $M_1$ but a noisier version of $M_2$; hence both X and Y should have access to some unique information about M (X through $M_1$ and Y through $M_2$). Hence, for multivariate M, it is not always possible to show the existence of a Markov chain to obtain the PID terms. However, it is possible to specify majorization conditions (e.g., something akin to a>c and b>d) for multivariate M, where one is still able to obtain PID terms through the Markov chain construction technique used in our work. For example, theorem 3 can be generalized to multivariate M using the full result of lemma 5 of Shang & Poor IEEE Trans. on Info Theory’12.
Note that these majorization conditions are more restrictive as they guarantee one of the UI terms to be zero only for specific parameter values of the system, compared to the PID results for univariate M, where one of the UI terms is always zero for all system parameter values. We are actively trying to derive these majorization conditions for our theorems, but these conditions are not straightforward to obtain as they require us to construct some matrix generalization of binomial thinning and new matrix constructions for theorem 2. As a side note, Venkatesh & Schamberg ISIT’21 provides a nice discussion on why M being univariate is essential for jointly Gaussian systems, along with a corresponding majorization condition for Barrett’s univariate-M Gaussian result. **Answer to Q2**: The “if” clause in theorem 1 accounts for different skewness of the stable family distributions. The parameter $\beta$ encodes the skewness of the stable distributions, where $\beta=0$ indicates no skewness, positive $\beta$ denotes skewness towards positive reals, and negative $\beta$ denotes skewness towards negative reals. The intuition behind the condition is that the random variables X and Y, for general stable distributions, also convey information about the random variable M through their skewness in addition to the standard SNR interpretation. Hence, in order for Y to have zero UI about M, X needs to have sufficiently high SNR to overcome the additional information conveyed by Y about M by virtue of its different skewness compared to X. The level of SNR required by X in order to ensure that Y has zero UI is quantified by the inequalities given in the “if” clause of theorem 1. For example, let us assume that the SNR of X is 2, the SNR of Y is 1, $\alpha=1$, and $\beta_X=0$. Then, Y will not have any UI about M if its skewness parameter satisfies $-1/2 \leq \beta_Y \leq 1/2$. Hence, in order for Y to have non-zero UI about M, its skewness must be either larger than $1/2$ or smaller than $-1/2$.
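As a small numerical companion to this rebuttal (all parameter values below are illustrative choices, not taken from the paper), the following sketch evaluates the Cauchy RI/UI closed forms quoted above and simulates the binomial thinning operation $Y = p \circ X$ invoked in the Markov chain constructions:

```python
import math
import random

def cauchy_ri_ui(gamma_m, gamma_x, gamma_y):
    """RI and non-zero UI for M ~ Cauchy(0, gamma_m), X|M ~ Cauchy(M, gamma_x),
    Y|M ~ Cauchy(M, gamma_y), per the closed forms quoted in this rebuttal."""
    ix = math.log(1 + gamma_m / gamma_x)
    iy = math.log(1 + gamma_m / gamma_y)
    return min(ix, iy), max(ix, iy) - min(ix, iy)

def poisson(lam, rng):
    """Knuth's method for sampling Poisson(lam)."""
    threshold = math.exp(-lam)
    k, acc = 0, rng.random()
    while acc > threshold:
        k += 1
        acc *= rng.random()
    return k

def binomial_thin(x, p, rng):
    """Binomial thinning p o X: keep each of the x counts independently w.p. p."""
    return sum(1 for _ in range(x) if rng.random() < p)

# Cauchy illustration: X is the less noisy source (gamma_x < gamma_y), so Y's
# UI is zero and the non-zero UI term belongs to X.
ri, ui = cauchy_ri_ui(gamma_m=1.0, gamma_x=0.5, gamma_y=2.0)

# Thinning illustration: if X ~ Poisson(lam), then p o X ~ Poisson(p * lam),
# i.e. the noisier source is a thinned copy of the less noisy one.
rng = random.Random(0)
lam, p, n = 10.0, 0.3, 2000
ys = [binomial_thin(poisson(lam, rng), p, rng) for _ in range(n)]
print(ri, ui, sum(ys) / n)  # empirical mean of ys should be near p * lam = 3
```

With these parameters, RI equals $\log(1+1/2)$ and the non-zero UI equals $\log 3 - \log 1.5 = \log 2$, matching the min/max structure of the quoted expressions.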
--- Rebuttal Comment 1.1: Comment: Thank you for the response. While some limitations remain, I am happy to support this paper for publication. --- Reply to Comment 1.1.1: Comment: Thank you for the insightful comments and for supporting our work for publication.
Rebuttal 1: Rebuttal: We thank all the reviewers for their thoughtful evaluation of our manuscript. We are grateful for the instructive comments and suggestions provided by all the reviewers, which will undoubtedly improve our work. Some common remarks by reviewer Ffcm and reviewer vKRJ regarding the presentation of our results seem to stem from us not explicitly defining what we meant by "analytically computing PID". To address these common comments, we will rephrase our manuscript to more carefully state our claims based on their suggestions, e.g., by explicitly acknowledging that all our theoretical results focus on the cases where one of the UI terms is zero, and by discussing in more detail the PID definitions for which our results hold. The attached document contains the additional analysis on the upper bound suggested by reviewer zuEu. Pdf: /pdf/f189932e9e136fcf24fd466facc63ec7d472e80b.pdf
NeurIPS_2024_submissions_huggingface
2024
ContextCite: Attributing Model Generation to Context
Accept (poster)
Summary: This paper proposes a simple and effective approach for attributing LLM generation to the sources in the context, where a set of random ablations and their corresponding effects on the model's probability of generating the response are modeled with a (sparse) linear relationship. The method achieves impressive performance and also allows applications such as verifying model generations or denoising the sources in the context. Strengths: - The problem of attribution is very important, and the paper focuses on an understudied aspect of how different sources *cause* the model to generate what it generates. - The proposed method is simple and intuitive. Comparison with the baselines is thorough and clearly demonstrates its superior effectiveness. - The paper is generally well-written and easy to follow. Weaknesses: - The method may be a bit costly to run, given that the linear relationship needs to be re-learned for every (context, generation) instance, and the number of sample ablations also grows when the context scales. - If I'm not mistaken, the proposed linear relationship implicitly assumes that the sources in the context contribute in a rather independent manner to the model's generation. This may not work well when the generation involves complex interactions and reasoning over a large number of sources in the context (beyond just a few sentences as in, e.g., HotpotQA). For example, if the task involves some kind of logical deduction, missing a certain premise in the context could cause a snow-balling effect on how the model processes the remaining sources and scores its generation, which is very hard to model with a linear relation. Technical Quality: 3 Clarity: 3 Questions for Authors: For extracting query-relevant information from the context (Section 5.2), if some important sources are "lost in the middle" and overlooked by the model, then the proposed approach would give a very low score for them as they don't contribute to model generation.
Then they also won't get selected during the extraction phase. How would this improve QA performance then? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We address each question individually below. **Effect of number of context sources (and sparsity) on sample complexity.** We agree with the reviewer that computational cost is a significant limitation of ContextCite. However, we would like to point out that the number of sample ablations needed often does not actually grow quickly as the context scales. As we discuss in Appendix C.1.1, the number of ablations required to learn an accurate surrogate model only grows linearly in the number of *relevant* sources (rather than growing linearly in the total number of sources). In Figure 3(a), we observe empirically that the number of relevant sentences often remains small ($<10$) even when the context consists of hundreds of sources. This suggests that a small number of context ablations may suffice even for very long contexts (in practice, we find that using just $32$ context ablations often works well). We would also like to point out that while ContextCite is expensive to run, it can be applied in a post-hoc manner. A user can interact with an LLM as usual, and choose to apply ContextCite (incurring this cost) only in cases when they are suspicious of a model’s generated response and would like to see attributions. **Effect of linearity in context attribution.** We agree with the reviewer that a linear model may not handle complex dependencies between sources, such as those that might arise when reasoning over many sources. We do find it promising that a linear model works well for Hotpot QA, in which every question explicitly requires combining information from two or more documents. That said, proper attribution for more sophisticated reasoning may require more complex surrogate models (or interaction terms for the linear model), which is an interesting direction for future work.
**Why does selecting only relevant sources help?** The reviewer makes a good point that if the model uses the wrong sources to generate a response, then selecting just these sources is unlikely to help. To understand why relevant sources can actually help, we consider two failure modes associated with long contexts: 1. The model identifies the wrong sources. Selecting these sources will *not* help. 1. The model identifies the correct sources, but *misinterprets* information because it is distracted by other irrelevant information. Selecting these sources *can* help, because a shorter context that still contains them leaves less room for distraction. We intend to include this discussion of why selecting relevant sources helps in the next revision. --- Rebuttal Comment 1.1: Comment: Thank you for the response, which partially addresses some of my concerns. I'll keep my earlier evaluations.
Summary: The current paper introduces the task of context attribution, which aims at attributing a generated response of an auto-regressive LM back to the sentences in the input contexts. Targeting this task, ContextCite was proposed to predict which piece of context changes the probability of the generated response most. Together with ContextCite, this paper also designed two evaluation metrics based on the probabilities of the generated responses. Finally, the experiments on two small LLMs show the effectiveness of ContextCite. Additionally, this paper provides several examples of applications of ContextCite. Strengths: - This paper introduces the task of context attribution, which, AFAIK, is the first time this important and interesting task has been introduced. The task is well-motivated and well-defined. One could expect that this paper could initiate a new line of work on interpretability. - Along with the proposed task, this paper proposed a solution, namely ContextCite, building on a simple but effective idea. The subsequent evaluation suggested its promising performance. - One thing that I like the most about this work is that in addition to the task and the model, this paper also introduces two well-designed evaluation metrics as well as a list of potential applications of ContextCite. Weaknesses: Though I thought this paper was already of good quality, I still have three major concerns regarding the task, the model and the evaluation. First and foremost, in my opinion, the current definition of the task is still highly limited and can be largely extended. For example, the current task only focuses on attributing the whole generated responses, but for tasks like summarisation (which was also examined in this work), it is also important to attribute each token in the response in order to, for instance, trace hallucinations.
I am personally very OK if the solution of such an enhanced task is not provided, but I am very inclined to see that the related discussions about the possible extensions of the task can be included. Second, regarding ContextCite, though the idea of making use of the probability change looks promising, after due consideration, I have a strong feeling that its relation to which parts of the input contribute more to the response is not fully deterministic. In my opinion, the probability of a response could be reduced more either because, as pointed out in the paper, the omitted piece contributes more to the response, or merely because the resulting context after removing the sentence is far more incoherent. One example of such a possibility is given in Appendix C2, but the discussion seems not to be sufficient. This also made me think that the model only suits tasks whose output relies on very long input text (e.g., summarisation and document-based QA, as omitting one sentence would not highly influence the coherency). Finally, the evaluation was done on three very related tasks and two tiny language models. This makes me somewhat question whether the solution is generalizable to other NLP tasks (that are open-ended, for example), other larger models and other prompt designs (e.g., in-context learning and chain-of-thought). Technical Quality: 2 Clarity: 3 Questions for Authors: Can you provide some examples of the selected sentences and comment on the distributions and characteristics of the selected sentences? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: See my points in the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We address each question individually below. **Token-level context attribution.** Our task actually does consider attributing arbitrary selections from the response (including individual tokens). We discuss this in Section 2.3. When we evaluate ContextCite in Section 4, we evaluate the attributions of individual sentences in the response: “given an example, we (1) split the response into sentences using an off-the-shelf tokenizer and (2) compute attribution scores for each sentence”; see lines 202-203. We have also included a figure in the global response illustrating an example of a ContextCite attribution on Llama-3-70B for a particular generated statement. **Effect of ablations on context coherence and model output.** The reviewer makes a good point that a context source might not directly contribute to a response, but ablating it might influence the response indirectly by making the context incoherent. For example, consider the sentences: “John lives in Boston. Charlie lives in New York. He sometimes visits San Francisco.” In this case, “He” refers to Charlie. However, if we ablate just the sentence about Charlie, “He” will now refer to “John.” So, if we attribute the generated statement “Charlie sometimes visits San Francisco,” then we’ll get two sources: (1) “Charlie lives in New York” and (2) “He sometimes visits San Francisco.” While, by some definition, (2) is the actual source, having both (1) and (2) as sources is also reasonable behavior in our view. So, we do not view this issue as a significant limitation. Extending ContextCite to account for such dependencies (e.g., via a non-linear surrogate model) is an interesting avenue for future work. **Experiments on larger models and other NLP tasks.** - **Larger models:** Our experiments in the paper evaluate ContextCite on models ranging from 4B to 8B parameters. 
To showcase the scalability of our method, we have included an additional figure in the global response, which illustrates a ContextCite attribution for Llama-3-70B. Furthermore, in Appendix B.3, we show that our experiment from Section 5.2 (selecting query-relevant context sources) improves results on Llama-3-70B as well. - **Other NLP tasks:** The tasks we use to evaluate ContextCite are reasonably diverse, including summarization (CNN DailyMail), knowledge extraction (TyDi QA), and reasoning using information from multiple documents (Hotpot QA). Still, we agree with the reviewer that certain tasks/prompt designs may be less suitable for ContextCite. As we note in Appendix C.2, if the model outputs a chain-of-thought reasoning chain, then it is quite possible that only the initial facts would yield proper attributions. For example, for the output “I have 3 eggs. You have 4 eggs. Together, we have 3 + 4 = 7 eggs.”, ContextCite would be able to meaningfully attribute “I have 3 eggs” and “You have 4 eggs”, but not “Together, we have 3 + 4 = 7 eggs” because this statement follows from the previous part of the response and not directly from the context. Extending the core task of context attribution to chain-of-thought style reasoning chains is an interesting avenue for future work. **Examples of attributions.** We’ve provided an example attribution in the global response, and will include additional examples in the next revision. --- Rebuttal Comment 1.1: Title: Thanks for your response. Comment: Thanks for your response. Though I still think what I wrote in my review are still limitations of this work, I agree they are minor points. I'll keep my recommendation of an acceptance.
Summary: In an attempt to understand how language models leverage context information in their generation, this work studies contributive context attribution. Following the recent trend in attribution research, the authors propose to use (sparse) linear models to learn the importance of each unit/sentence in the context, and demonstrate its effectiveness using top-k log-probability drop and linear datamodeling score (LDS) tests. Lastly, the authors explore two potential applications of contributive context attribution, namely generated statement verification and query-relevant information selection. Strengths: The paper is very clearly written, well-structured, and easy to follow: - The problem of contributive context attribution is relatively new. However, the authors provided all the necessary contexts for the problem, and thus I was able to understand the problem setup easily. - ContextCite seems to achieve promising attribution accuracy on both top-k log-probability drop and LDS tests, especially in comparison to baseline methods. - Their context selection experiment (section 5.2) results also look promising. If we make the analogy between data attribution and context attribution, this probably mirrors a data selection experiment. It's interesting to see that context selection generally leads to performance improvements. Weaknesses: - While (contributive) context attribution looks to be new on the surface, their problem setup, method (i.e. ContextCite), and evaluation are mostly borrowed from existing data attribution literature. While authors provided proper references to those literature, I still believe the technical contribution largely lacks novelty. I wasn't able to grasp whether authors introduced any major modifications to adapt existing data attribution methods (i.e. datamodel) and evaluations (i.e. brittleness and LDS) to context attribution. If I missed something, please correct me.
- If my understanding is correct, they prompted GPT-4 to generate the (proxy) gold label for their generated statement verification experiments (section 5.1). This was confusing because the authors noted in line 84-85 that prompting the model for citations is generally considered as corroborative attribution. To be precise, the authors asked for the correctness of each sentence instead of for citations here, but it was a bit odd for me to use the prompting technique to evaluate contributive context attribution. Technical Quality: 3 Clarity: 4 Questions for Authors: - How does the context length affect the number of required forward passes with randomly sampled context information? Naively thinking, if you have n parameters in your linear model, you may need n samples to properly fit the linear model (without a sparsity assumption). Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The authors adequately addressed the limitations in Appendix C. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We address each question individually below. **On technical novelty and connections to data attribution.** We agree with the reviewer that our work leverages the data attribution literature (e.g., datamodels, LDS); we acknowledge and discuss these connections in our paper. That said, understanding how a model uses information presented in its context is different (both conceptually and mechanically) from understanding how a model uses its training examples. Furthermore, some of the applications enabled by context attribution (e.g., helping verify correctness, see Section 5.1) don’t have clear data attribution analogues. Hence, despite the connections, we believe that it is valuable to study context attribution as its own task. We also believe that it is a valuable empirical finding that a linear surrogate faithfully models how a language model uses its context (especially given the dependencies between sentences). A priori it is not clear that this design choice from the data attribution literature would work well for context attribution too. **Confusion over corroborative attribution and usage of GPT-4.** We would like to briefly clarify: the goal in Section 5.1 is to use the contributive attributions estimated via ContextCite to help verify the correctness of a generated statement. Our intuition is that if the contributive sources (i.e., sources which *cause* a model to generate a statement) do not also *entail* the generated statement, then the generated statement might be incorrect (because the model may have misinterpreted the sources or hallucinated). We use GPT-4 only to assess whether each generated statement is actually correct and not for any sort of attribution. The reviewer is right that verifying correctness is closely related to corroborative attribution. 
In Section 5.1 though, we only focus on exploring whether *contributive* attribution methods (like ContextCite) can help in verifying the correctness of generated statements. **Effect of number of context sources on sample complexity.** The reviewer makes a good point that in general, if we have $n$ sources, we require $O(n)$ forward passes to learn an accurate linear surrogate model. As we discuss in Appendix C.1.1, in the case where the ground-truth model is sparse, however, we actually only require $O(k\log(n))$ forward passes where $k$ is the number of non-zero entries in the ground-truth model. In Figure 3(a), we illustrate empirically that even when the context is very long (hundreds of sources), the number of *relevant* sources (i.e., sources whose removal causes a probability drop beyond some threshold) is generally small ($<10$). As a result, in practice, just $32$ forward passes are enough to learn an accurate surrogate model even when there are hundreds of sources.
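To make the sample-complexity point above concrete, here is a self-contained sketch of fitting a sparse linear surrogate from 32 random context ablations over 100 sources. A synthetic sparse linear function stands in for the language model's log-probability (running an actual LM is out of scope here), and the hand-rolled coordinate-descent Lasso, the planted support, and all numbers are illustrative choices, not the authors' implementation:

```python
import numpy as np

def lasso_cd(X, y, lam, sweeps=200):
    """Plain coordinate-descent Lasso: min_w 0.5 * ||y - X w||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(d):
            if col_sq[j] == 0:
                continue
            # Correlation of column j with the partial residual excluding j.
            rho = X[:, j] @ (y - X @ w) + col_sq[j] * w[j]
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)
n_sources, n_ablations = 100, 32          # hundreds of sources, 32 forward passes
true_support = [7, 23, 42, 77]            # only k = 4 sources are "relevant"
w_true = np.zeros(n_sources)
w_true[true_support] = [5.0, -4.0, 3.0, 2.0]

# Each row is a random ablation mask (1 = source kept); the "log-probability"
# response is a k-sparse linear function of the mask, standing in for the LM.
masks = rng.integers(0, 2, size=(n_ablations, n_sources)).astype(float)
log_probs = masks @ w_true

w_hat = lasso_cd(masks, log_probs, lam=0.5)
top4 = sorted(np.argsort(-np.abs(w_hat))[:4].tolist())
print(top4)  # the planted relevant sources are typically recovered
```

Despite having far fewer ablations (32) than sources (100), the sparsity of the ground truth lets the Lasso pin down the relevant sources, mirroring the $O(k\log(n))$ argument from Appendix C.1.1.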
Summary: The authors of submission 17808 formalize the problem of context attribution for language models. That is, identifying which parts of the input context caused a model to generate a particular output. The authors propose ContextCite, a method that learns a sparse linear surrogate model to approximate how ablating different parts of the context can impact the model's output probability. The authors claim several contributions, including 1) Formalizing the official task of context attribution and evaluation metrics, 2) Developing ContextCite, a simple and scalable attribution method, 3) Demonstrating ContextCite outperforms baselines on attribution quality, and 4) Showing applications in verifying generated statements and improving response quality. Strengths: - Important problem formulation: Context attribution addresses a key challenge in understanding and improving llm behavior / use cases esp. faithfulness and interpretability. - Comprehensive experiments: Right metric, datasets, and models. Comprehensive experiments are provided. - Case study -- Two valuable downstream applications are addressed - verifying generated statements (5.1) and improving response quality (5.2). - Clear writing and presentation -- Overall I find the presentation of the paper well-structured and easy to follow. Weaknesses: - As noted in Appendix C.2, the current ablation approach of removing sentences can break dependencies between sents. This limitation could be explored a bit further in the paper, but it's understandable if there's time constraint. - A little more theoretical justification for why the linear surrogate model works well would be better. Technical Quality: 3 Clarity: 4 Questions for Authors: - While the efficiency and simplicity of linear surrogate models are acknowledged, did you try more sophisticated surrogate models beyond linear regression in your early exps? If so, how did they look like?
Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors provide a good discussion of limitations in Appendix C.2. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We address each question individually below. **Justification for linear surrogate modeling.** Our justification for using linear surrogate models for context attribution is purely empirical. A priori, we do not see any theoretical reason why linearity is the “right” design choice for context attribution. That said, across NLP tasks (summarization, knowledge extraction, reasoning over multiple documents), we empirically observe that a linear surrogate faithfully models how language models use their context (i.e., the surrogate model’s predictions match the actual log-probability drops, see Appendix B.1). Furthermore, by inducing sparsity, we are able to learn accurate linear surrogate models efficiently (with just $32$ context ablations, even when the context consists of hundreds of sources). **On non-linear surrogate modeling and sentence-level dependencies.** We agree that extending ContextCite to work with non-linear surrogate models is a promising avenue for future work. As the reviewer points out, surrogate model classes that factor in interactions among context sources (e.g., decision trees, linear models with interaction terms) could naturally account for the dependencies between context sources. Moreover, the expressivity of non-linear context attribution could help in capturing the effect of context ablations on model generations more accurately. In this work, however, we did not look into nonlinear variants of ContextCite for two reasons: First, as per our evaluation in Section 4, sparse linear surrogate models already result in accurate and sample-efficient context attributions. More sophisticated surrogate models may require many more context ablations to learn accurately. Second, linear models directly yield an attribution score for each source, making them interpretable. 
It would be less clear how to assign an attribution score to each source if the surrogate model were, say, a decision tree.
Rebuttal 1: Rebuttal: We thank all reviewers for their helpful feedback, which we think has highlighted areas where we could have been clearer. We have responded to each reviewer individually in detail, and we use this comment to highlight the additional experiments we have conducted in our appendices as well as in response to reviewer concerns. **Paper contributions.** We would like to reiterate the three main contributions of this work: 1. Introduce the task of context attribution, i.e., understanding how a language model uses information provided in its context when responding to a query. 1. Present ContextCite, a simple and scalable method for context attribution. 1. Explore two applications of context attribution: (i) helping verify factuality and (ii) selecting query-relevant information to improve response quality. **Additional experiments in response to reviewers’ questions.** In response to the reviewers’ feedback, we perform three new experiments, as outlined below. We will add these experiments to the next revision. - Scaling ContextCite to large-scale language models (Llama-3-70B) - Applying ContextCite with words as sources (instead of sentences) on a reasoning benchmark (DROP) - A new application: detecting adversarial poisons in (backdoored) long contexts **Attached PDF.** We’ve also attached an example of a ContextCite attribution of Llama-3-70B for a randomly sampled example from the CNN DailyMail news summarization task. We include this example to illustrate that ContextCite scales to larger models and can be used to attribute arbitrary selections from the generated response (not just the whole response). Pdf: /pdf/54f8405c285aa14d93fe5c79eb66c66ec4718387.pdf
Amortized Active Causal Induction with Deep Reinforcement Learning
Accept (poster)
Summary: The authors propose CAASL, which amortizes an intervention policy in the setting of causal structure learning. They apply this algorithm in the setting of a synthetic environment and a gene expression simulator. They use AVICI as a reward model to estimate an approximate posterior over the graph’s adjacency matrix. The RL algorithm is SAC, and the policy and value networks are transformers. On their two environments, CAASL outperforms several baselines (in terms of uncovering the true adjacency matrix) and performs better in out-of-distribution settings. Strengths: The authors combine several lines of work to tackle the setting of causal structure learning: soft actor-critic, AVICI, transformer policies. From what I can tell, this is a novel combination of these methods. These results seem reasonably significant, enabling fewer interventions to obtain comparable adjacency matrix estimation accuracy. The paper itself is fairly clear in terms of writing and notation. The empirical analyses are clear and helpful, exploring in-distribution vs. out-of-distribution generalization and ablations. All plots are clearly labeled. Weaknesses: The authors put considerable effort into explaining the setup of causal structure learning, etc. However, perhaps due to my relative unfamiliarity with this area, the particular details of the setup were still unclear to me. Figure 1 attempts to present an overview of where their method fits into a causal structure learning pipeline, however, I would have instead (or in addition) preferred to see how CAASL links up with AVICI, the simulator, etc. From Figure 1, we only see the amortized intervention policy without any of the other machinery. Again, this may be due to my unfamiliarity with this area, but the novelty of CAASL was unclear to me. For instance, how does AVICI estimate the intervention policy? Is this not an applicable baseline? Is the main benefit of CAASL to improve the efficiency of policy estimation over AVICI? 
What are the natural baselines to consider in comparison with CAASL? In the experiments, the authors present two other intervention baselines, but these do not perform well. Is the benefit, then, some type of accuracy/performance? The experiments seemed slightly toyish, and the baseline comparisons did not ease my concerns. The most performant baseline is a random intervention policy. Are we to believe that the other baselines are actually *worse* than random? And are these problems simple enough to just use random search? Further, as the data dimensionality increases, CAASL is not able to outperform the random baseline. I would have, naively, expected the opposite, in which amortizing the intervention policy should yield an even larger improvement as the problem becomes harder. Technical Quality: 4 Clarity: 3 Questions for Authors: Why does the random intervention policy outperform the other baselines, and why does CAASL revert to this performance as the dimensionality increases? Have previous works explored the environments used here? I.e., are these standard benchmarks? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for the detailed feedback. We answer your questions and concerns below: - "preferred to see how CAASL links up with AVICI, the simulator, etc": Thanks a lot for the great suggestion. For the camera-ready version, which provides an additional content page, we will include another figure that highlights the individual components of the CAASL training procedure. We would like to clarify that AVICI is an amortized structure learning framework and does not itself do intervention design. We use AVICI to design rewards for our intervention design policy, which can be trained with a simulator. - "Difference between AVICI and CAASL": We would like to clarify that AVICI is an amortized causal structure learning algorithm and CAASL is an intervention design policy; the two address different problems. In particular, AVICI predicts a (distribution over) causal graphs given any dataset, and CAASL predicts the most informative intervention to perform given the experimental history. We use the prediction of AVICI to design rewards for training the intervention policy. The benefit of our method over all other applicable baselines is that we not only achieve amortized intervention design without parameter updates, but also do not require any intermediate posterior estimation. - "The experiments seemed slightly toyish": Compared to prior work, the experimental settings we consider are among the more challenging ones for active causal structure learning. If there is any specific concrete experimental setting that you believe could improve our work, we are happy to run those experiments and include them. - "The most performant baseline is a random strategy": In active causal structure learning, the random strategy is a very well-regarded baseline due to its strong empirical performance. All the related works listed in the paper compare with random, and its competitive performance can also be seen in prior works.
We would like to clarify that the reason the random baseline is competitive is not that the problems are simple, but rather the opposite: they are in fact hard. Other baselines like DiffCBED use an explicit posterior model to predict interventions. Obtaining an explicit causal graph posterior is a hard problem in itself, even in very simple settings, which in turn hurts the performance of these baselines relative to random. - "as the data dimensionality increases, CAASL is not able to outperform the random baseline": Note that the dimensionality-increase experiments are OOD experiments, i.e., CAASL has never seen data of that dimensionality during training. Hence, the larger the gap from the training dimensionality, the larger the distribution shift. With more distribution shift, CAASL is expected to slowly degrade in performance, as is expected of other ML/RL methods. - "Have previous works explored the environments used here? I.e., are these standard benchmarks?": Synthetic environments are very common across both causal structure learning and active causal structure learning. The GRN environment SERGIO has been used in AVICI for structure learning before, but not in active causal structure learning. This is because SERGIO is a very challenging environment for the existing baselines, and most baselines are not applicable to it since no likelihood is available. We hope that addresses your concerns and questions. We would be glad to answer any further questions you may have. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for your responses to my questions. They have addressed some of my concerns. As the question of comparison with the random intervention strategy also came up in reviewer o26x's review, I will wait to update my score until the reviewer discussion period. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thanks a lot for the feedback.
We have provided further clarification on the random intervention strategy to reviewer o26x. If there are any particular questions that require further clarification, we are happy to answer them as well.
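To make the reward construction discussed in this thread concrete (an AVICI-style amortized posterior used to score an intervention by how much it improves the predicted adjacency matrix), here is a minimal sketch; `avici_predict`, `correct_edges`, and `intervention_reward` are hypothetical stand-ins for illustration, not the authors' code, and assume the posterior exposes per-edge marginal probabilities.

```python
import numpy as np

def correct_edges(edge_probs, a_true):
    """Expected number of correctly predicted adjacency entries under
    the amortized posterior's per-edge marginals (hypothetical API)."""
    return float(np.sum(a_true * edge_probs + (1 - a_true) * (1 - edge_probs)))

def intervention_reward(history, new_sample, a_true, avici_predict):
    """Reward for one intervention: the improvement in correct edges after
    appending the new interventional sample to the experimental history."""
    before = correct_edges(avici_predict(history), a_true)
    after = correct_edges(avici_predict(history + [new_sample]), a_true)
    return after - before
```

Because the simulator provides the true adjacency matrix during training, this reward is computable without any explicit likelihood, which is the property the rebuttal emphasizes over DiffCBED-style posterior-based baselines.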
Summary: The paper proposes to use RL to train policies that can adaptively select interventions/experiments for obtaining data to do active causal structure learning. The approach amortizes policy training by training the policy on synthetically generated data, an approach that has shown some promise in the amortized training of causal structure learning algorithms. These pretrained causal structure learning inference models are used as the reward function in the training of the policy used for experiment design. The policy takes in a dataset and outputs an experiment (intervention on the casual model), and uses a network architecture that bakes in the relevant invariances/equivariances. Evaluation is done on synthetic environments including a gene regulatory network environment. Strengths: - The approach seems reasonable and I think members of the NeurIPS community will find the experiments evaluating this approach interesting - The experiments in section 5.1, particularly the results on OOD performance are encouraging - The methodology is generally clearly presented - It is a novel and interesting combination of the ideas in [6], [36] Weaknesses: - Section 5.2. I think more information needs to be given to interpret the results, perhaps with additional plots showing a measure more easily interpretable than 'Returns'. My interpretation from the plots currently presented in the main paper in section 5.2 is that the proposed method technically improves over a random baseline, but by such a small amount that the improvement is practically insignificant - Section 5. It would be helpful to show experiments demonstrating that the approach actually converges to learning the true graph if enough experiments are done, since even simple approaches like random have this property. - I think more background needs to be given on what the baselines in the experiments are and how they are run. Both perform much worse than a random strategy, which is surprising. 
It also seems that both baselines are batch approaches (at each iteration select multiple distinct interventions), while the approach here always uses batch size 1. How are the baselines modified (or do they need to be) for the batch-size-1 setting, and why do they perform so poorly, even in-distribution in Figure 3? - Since greedy approaches are so common in this field, I think a natural ablation would have been to train the RL policy with an effective horizon of 1 (effectively a greedy/bandit policy trained under the same methodology presented). This can be done by rolling out the policy for the full horizon $H$, but then updating the policy by considering each timestep as its own 'episode'. This would answer the very important question of whether this extra RL/adaptive machinery is really necessary compared to just looking one experiment ahead. I couldn't find what the horizon length used during training of the policy was. Apologies if I just simply missed this. Technical Quality: 3 Clarity: 3 Questions for Authors: - Section 5.1. What is the intervention? I got from the appendix that it is a do() intervention, but for each method how do you select what value to set the intervention targets to in the case of Random and SS Finite? - Line 88: "where the first column corresponds to one-hot encoding of whether a particular variable is intervened or not". This made me believe you only deal with singleton intervention targets, but reading further it seems like the approach also handles and makes use of multi-target interventions. I think it is not 'one-hot' in your case, but confirmation from the authors would be appreciated. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The method requires inputting an entire dataset into the policy. It would be helpful to have experiments and discussion that reflect the ability of the approach to scale not just over graph size (included already) but over dataset size/number of rounds of experiment.
Besides this I have no complaints about the transparency of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for the detailed feedback. We address your questions and concerns below: - “Section 5.2… More information needs to be given to interpret the results”: In the Appendix, Figures 14-18 present results for metrics other than returns. We argue that returns is an interpretable metric, as it indicates the test-time rewards of the policy on unseen environments. Regarding the practical significance of the performance of our approach relative to random, we would like to emphasize that our approach can handle unknown likelihoods and high amounts of missing data, which has not been studied in active causal structure learning before. The results we have obtained can be further improved by also updating the reward model AVICI as more interventional data is acquired during training. We hope that through this contribution, further active structure learning methods aimed at practical scenarios can be developed. - “Section 5. It would be helpful to show experiments demonstrating that the approach actually converges to learning the true graph.”: Thanks a lot for the suggestion. While convergence to the true graph as more interventional data is acquired is a desirable property of any method, we would like to highlight that this property is also influenced by the structure learning algorithm used to evaluate the policy. In our case, we use a pretrained AVICI model, which does not necessarily converge to the true graph when given more data. Hence, even a random policy, when evaluated with AVICI, need not converge to the true graph. - “More background regarding baselines”: The chosen baselines are the state of the art in Bayesian Optimal Experimental Design for causal discovery that can select multi-target interventions. For the random strategy, the intervention value is selected uniformly at random in the range [-1, 1].
For SS-Finite, we set the intervention value to 5, as suggested in previous approaches like [52]. - "Baselines are batch strategies and perform poorly": Indeed, both baselines are batch strategies. We set the batch size to 1 in order to make a fair comparison. Most baselines work well when the underlying graphs are very sparse (for instance, when the average number of edges per node is less than 1). However, in the settings we consider, the graph sparsity is set to be more realistic for real-world applications, in which case a random policy is still competitive. In addition, the baselines still use an approximate posterior, which at times makes the induced policy worse than random. We will make these details clear in the revision. - “Possibility of training a greedy policy”: Thanks for bringing up this great point. In Figure 2, one can see the adaptiveness of the policy at play, especially with respect to optimal intervention values. We agree that if adaptiveness is not necessarily a requirement, one can simply train the policy with a time horizon of 1. In such a setting, we expect that our approach still offers advantages in terms of amortization through design-space symmetries. For the experiments, the horizon is always set to 10, both during training and evaluation. - “where the first column corresponds to one-hot encoding of whether a particular variable is intervened or not”: We apologize for the poor choice of wording. By one-hot encoding, we mean that the corresponding vector is a {0,1} vector. We will correct this terminology to make it clear that we select multi-target interventions. - “Scalability wrt dataset size and number of experiments”: With regards to dataset size, one can scale depending on compute availability by using a data-parallel training approach. With regards to scaling the number of experiments, our method may show limitations, as standard RL algorithms do not perform well over long horizons.
However, as noted to the other reviewer (ydzf), it is indeed interesting to look at OOD performance when the number of experiments during testing changes. We will include this experiment for the camera-ready in appendix. We hope that addresses your concerns and questions. We would be glad to answer any further questions you may have. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you to the authors for their response. I kept my score the same. > Regarding the practical significance of the performance of the approach wrt random, we would like to emphasize that our approach can handle unknown likelihoods and high amounts of missing data, which has not been studied in active causal structure learning before I don't think this really addresses the point since the random strategy can also handle all these novelties. The performance compared to random on SERGIO seems like a very significant limitation that is glossed over in both the paper and the general Openreview discussion. > Thanks for bringing this great point. In figure 2, one can see the adaptiveness of the policy, esp. wrt optimal intervention values, at play. ... In such a setting, we expect that our approach still offers advantages in terms of amortization through design space symmetries. I think showing that an adaptive strategy is worth pursuing can only be done by showing that it is actually performing better than the ablated non-adaptive strategy (eg a bandit/setting time horizon to one). It doesn't really matter that the strategy proposed indeed ends up being adaptive, if it doesn't turn out that it is helping. This seems like a very fundamental experiment being missed when proposing an adaptive method. --- Rebuttal 2: Comment: Dear reviewer, Thanks a lot for the comments and the feedback. >I don't think this really addresses the point since the random strategy can also handle all these novelties. 
The performance compared to random on SERGIO seems like a very significant limitation that is glossed over in both the paper and the general Openreview discussion. We would like to highlight that we perform only 10 experiments with 50 initial observational samples in this setting, which might be too few samples for the AVICI model (used for evaluation and calculating the returns) to show significant improvements. An alternative is to train a policy that either outputs a batch of designs or, as mentioned in the rebuttal, uses an increased number of experiments. Of course, it is practically harder to train such a policy with RL, which can be counted as a limitation. Another point to highlight is that the existing methods for (Bayesian) active causal structure learning evaluate exclusively on synthetic data, where we perform significantly better. Besides, all the results are provided with 95% confidence intervals over 100 random environments, including for the SERGIO environment. The improvements we see in SERGIO, though not as substantial as those for the synthetic environment, are still noteworthy given that no other active causal structure learning algorithm is applicable in such a setting. Hence, our contribution should be viewed as a step forward in the ongoing discussion about advancing active causal structure learning. >I think showing that an adaptive strategy is worth pursuing can only be done by showing that it is actually performing better than the ablated non-adaptive strategy (eg a bandit/setting time horizon to one). It doesn't really matter that the strategy proposed indeed ends up being adaptive, if it doesn't turn out that it is helping. This seems like a very fundamental experiment being missed when proposing an adaptive method. Theoretically, as shown in Greenwald et al. 2019 [23] and Choo and Shiragur 2023 [13], performing interventions that are adaptive can be more sample efficient.
Empirically, given the limited space, we focus our experiments on studying the amortization and generalization properties of the transformer-based policy, which forms the core of our contribution. However, we are prepared to add the bandit setting (setting the time horizon to one) for the revision. We have also provided the code, which supports training the policy with a time horizon of one, and we will make it available upon acceptance. --- Rebuttal Comment 2.1: Title: reply by reviewer Comment: I am going to raise my score to a 5 under the assumption that the authors will include some of this discussion for the results in the SERGIO experiments and make it clear in the text that the improvements in the SERGIO experiment are either very modest or non-existent (seems to depend on which metric is used). I think right now the slightly opaque metric of 'returns' in the main text plots and lack of a clear commentary on these results gives an inflated sense of how well the approach is doing on SERGIO. I agree that the performance on synthetic benchmarks looks good. > Theoretically, as shown in Greenwald et al. 2019 [23] and Choo and Shiragur 2023 [13] My understanding is that these citations define adaptive as whether all experiments are done in parallel or done sequentially. Our present discussion and my original comment are really about greedy vs non-greedy (where I think we've both had the confusion of using the word adaptive since my original comment), which is a different property of the algorithm: whether the planning horizon is 1 vs >1. I don't think these citations comment on greedy vs non-greedy. --- Reply to Comment 2.1.1: Comment: Dear reviewer, Thanks a lot for your comments and for increasing the score. > I think right now the slightly opaque metric of 'returns' in the main text plots and lack of a clear commentary on these results gives an inflated sense of how well the approach is doing on SERGIO.
I agree that the performance on synthetic benchmarks looks good. We will include the comments about the limitations of using the AVICI model for evaluation and about the SERGIO results in general in the revision, along with the other changes highlighted in the common rebuttal section. In addition, a commonly agreed-upon metric for Bayesian causal structure learning is missing (a recent ICML paper [1] discusses this issue in more detail), which is why we have presented multiple metrics, one of which is the expected number of correct edges (returns). > I don't think these citations comment on greedy vs non-greedy. Thanks for the clarification. Indeed, these citations do not comment on greedy vs non-greedy. As mentioned before, we will add the bandit setting (setting the time horizon to one) for the revision, which will illustrate greedy vs non-greedy behavior. The provided code also supports training the policy with a time horizon of one. References: [1] Mamaghan, Amir Mohammad Karimi, et al. "Challenges and Considerations in the Evaluation of Bayesian Causal Discovery." In ICML (2024).
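In return-computation terms, the greedy ablation debated in this thread (treating each timestep of a rolled-out trajectory as its own episode) amounts to using a discount factor of zero. A minimal illustrative sketch of that equivalence, not the authors' training code:

```python
def discounted_returns(rewards, gamma):
    """Full-horizon returns G_t = r_t + gamma * G_{t+1}, computed backwards."""
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return out[::-1]

def greedy_returns(rewards):
    """Horizon-1 ('bandit') ablation: each step is its own episode,
    i.e. the gamma = 0 special case of the function above."""
    return discounted_returns(rewards, gamma=0.0)
```

With gamma = 0, each step's return equals its immediate reward, so the policy update never credits an intervention for information gained later, which is exactly the non-greedy behavior the ablation is meant to isolate.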
Summary: **Update after rebuttal:** I personally think that the authors have sufficiently addressed all criticism. Adding an extended discussion regarding the strong performance of a random baseline in many domains is a good idea and would situate the work even better. I remain in favor of accepting the paper, and have raised my confidence after taking into account all information. The paper aims at the problem of designing efficient intervention strategies for causal induction. In causal induction, the goal is to infer an underlying structural causal model from observational and interventional data - and while this inference problem in itself is hard and a very active area of research, the problem of how to optimally choose interventions given a limited intervention budget is of great importance in practice. Proposed solutions range from evolutionary and uncertainty-based active learning approaches to reinforcement learning, which is the method this paper pursues. The paper implements the intervention strategy via a (history-based) transformer that is trained via off-policy RL (via soft actor-critic) and is capable of incorporating some reasonable inductive biases via its architecture (permutation equivariance via attention). The trained policy is referred to as an amortized solution, since after training, a single forward pass determines the next intervention (without any parameter updates or RL). The paper also designs a novel reward function based on AVICI, a previously published transformer-based method for obtaining an amortized predictor (posterior) over the structural causal model, given a history of interventions and observations. Putting all of this together yields a policy for interventional strategies that aligns well with practices in modern machine learning and uses transformers’ strong empirical generalization to do the heavy lifting.
Accordingly, out-of-distribution performance and generalization are a concern (since theoretical guarantees are scarce or vacuous) - but as the paper shows, the method generalizes remarkably well to non-trivial distributional shifts on synthetic data (10 variable linear SCMs with additive homoskedastic Gaussian noise for training) and data from a single-cell gene regulatory network simulator (which simulates realistic measurements of gene expressions including noise and significant amounts of missing values). On-distribution and mainly out-of-distribution experiments in both domains show promising performance and good generalization of the method. Strengths: * Very well written paper, with good introduction for non-domain experts, making the paper very accessible to a wide NeurIPS audience. * Careful combination of previously published approaches with a novel intervention policy parametrization (based on a transformer) and a novel formulation of a reward function (based on a previously proposed method for amortized causal inference). The focus is on forward-pass simplicity and efficiency, and using transformer-based architectures throughout to achieve good generalization of amortized solutions. * Experiments show good performance, and most of the attention is devoted to evaluating out-of-distribution performance under various OOD conditions, which greatly supports the generality and reliability of the results. Weaknesses: * Experimental ablations investigate the data-generating distribution / ground-truth, but not the network architecture and various design choices (such as, e.g., the network size, the use of tanh vs. sigmoid vs. ReLU, the sensitivity to the underlying AVICI model and its training regime, …). It would be nice to see some ablations to empirically justify the design choices, but I do consider the OOD experiments more important for publication and am pleased to see a fairly thorough investigation there. 
* The discussion around amortization and how it relies on generalization to “just work” could be expanded a bit to make the paper even more broadly accessible. * Though the paper is very clear about which contributions are novel compared to what is taken off the shelf, it may help to have all the novel contributions in one place (e.g., end of the intro). **Verdict:** Though I am not a domain expert on causal inference for gene regulation, I greatly enjoyed reading the paper - it is very well written. The great introduction and background section makes the paper easily accessible to a wide ML audience. The method itself is somewhat intricate and composed of a number of parts that each come with several design choices - the paper strikes a good balance between laying out these options and explaining the ones taken in the paper, and then separating the novel parts from previously published parts. In the end this section can get quite dense, and it might benefit from some additional summary or overview diagram at the end of Section 3 (though I really enjoyed Sec. 4.1, the paper would also work if it were pushed to the appendix if more space is needed). The experiments are sensible and of small to medium-sized scale, and I appreciate the use of a realism-focused simulator for gene-regulation networks instead of a larger-scale synthetic experiment. The focus on out-of-distribution performance by investigating several OOD settings is particularly important for amortized methods, and I think it is well executed in the paper and results show the good performance and generalizability of the method. As a non-domain expert I cannot comment on the strength of the baselines that the method is being compared against (and have lowered my confidence accordingly) - though they seem to be non-trivial and sensible baselines. 
Overall I think the paper is very well written, makes a novel and original contribution (that includes putting many existing techniques together), the empirical investigation is thorough and convincing, and the work is of interest to a wide audience. I currently recommend to accept the paper, and have some (mostly optional) suggestions for improvement. **Improvements:** 1. Ablations for design choices: to support the design choices and get a sense of the sensitivity of the results some additional ablations would be interesting (I do not consider these to be critical for publication, but they would further strengthen the paper; the rebuttal period may be too short to conduct these): 1. How sensitive is the performance of the method to the underlying AVICI and the regime it has been trained on, ie., does AVICI first need to be carefully tuned for the application domain or can a relatively general AVICI model be used to easily obtain good results on many domains? 2. How sensitive are results in terms of the network architecture of the policy (network size, other activation functions)? 2. Another interesting experiment could be to test how well the amortized policy generalizes when increasing the intervention budget beyond the maximal training range (does the policy continue to perform well / degrade gracefully, or does it eventually break down catastrophically?). Like above, I consider this to be optional, but interesting. 3. To make the paper even more widely accessible consider having a brief paragraph in the background section on amortization. What is it? What are the advantages? What are the potential drawbacks (in-distribution vs. ood generalization “guarantees”). 4. Maybe have a brief summary or overview figure, etc. at the end of Sec. 
3 that summarizes the choices made in this paper (as opposed to the potential alternatives for the various parts which are pointed out nicely already) and identifies the novel parts and/or consider having a summary of contributions (e.g., at the end of the intro). Technical Quality: 3 Clarity: 4 Questions for Authors: **Minor comments:** 1. L108-109 (nit): L108 states that the posterior is trained “without having access to the likelihood of the data’, but L109 states that “the posterior can be trained by maximum likelihood with…”. Maybe rewrite to avoid possible confusion. 2. L111: Maybe add that “Empirically it has been shown that the AVICI model can generalize to…”. 3. L120: typo “they”. 4. L123: What is “technical noise”? (maybe add a reference to 5.2 where this is explained) 5. Zero-shot OOD (Fig. 4 and 5) - just to clarify: performance of the policy in this regime is only indirectly/implicitly reliant on good generalization of AVICI (since there is no RL at this stage; the reward model got “baked” into the amortized policy)? To improve OOD performance via additional learning it is plausible that fine-tuning the policy with fixed AVICI would be sufficient? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The paper clearly states limitations throughout and in a short limitations section. One general limitation (that is obvious to readers familiar with amortized algorithms) is that amortization typically only comes with in-distribution guarantees, and no ood-guarantees. Perhaps it is worth adding this to the limitations section (and clearly, the paper puts a lot of emphasis on empirically investigating this generalization). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for the very positive feedback on our work. We provide detailed responses to your questions below: - “Sensitivity to AVICI”: That’s a good point. In our work, we use a pretrained AVICI model that has been trained on datasets drawn from a wide range of SCM and noise distributions, so a relatively general AVICI model sufficed. But if amortization performance is more important than OOD generalization, an AVICI model trained on a narrower domain should further improve the results of the policy due to the more informative reward function. - “Increasing the intervention budget”: Thanks a lot for the great suggestion. The performance of the policy is based on the AVICI model. As we give AVICI more interventional data, we expect performance to improve, but the returns become more incremental. It is nevertheless interesting to think about increasing the intervention budget during training, especially to investigate up to what point the policy remains competitive compared to random. We will include this experiment in the camera-ready. - Given the extra page for the camera-ready, we will expand a bit on amortization, add a summary of contributions, and include a schematic figure of the training setup (which was requested by another reviewer as well). - Typos and minor comments: Thanks a lot for the corrections. We will fix these typos and sentences; we agree this will make the paper more readable. - “performance of the policy in this regime is only indirectly/implicitly reliant on good generalization of AVICI (since there is no RL at this stage; the reward model got “baked” into the amortized policy)?”: Indeed, the evaluation still relies on a causal structure learning model, in which case we still use AVICI. As a result, the performance of the policy in the OOD regime still implicitly relies on good generalization of AVICI.
One could also consider the possibility of updating the AVICI model when updating the policy, but we only study the simpler setting in this work. - We will add the limitations about OOD generalization guarantees for amortized algorithms. We hope that addresses your concerns and questions. We would be glad to answer any further questions you may have. --- Rebuttal Comment 1.1: Title: All concerns addressed / questions answered Comment: Thank you for the response. I have no more open questions or comments. Reading the other reviews and responses, I personally believe that all criticism has been sufficiently addressed. I think o26x raises an important point - which is to emphasize how it is often hard to beat a random baseline - and I think it would be nice to discuss this with more emphasis in the paper (discussion section, or limitations section). But I do not think that this fundamentally invalidates the approach and the careful analysis and comparison against SOTA baselines. I therefore remain in favor of accepting the paper and raise my confidence. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thanks a lot for the discussion and the positive assessment of our work. As indicated to the reviewer o26x, we will include a discussion of the competitiveness of the random baseline for the SERGIO environment.
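The permutation equivariance via attention that this review highlights as an inductive bias of the policy architecture can be checked numerically. The toy sketch below uses plain single-head self-attention over the variable axis as an assumed stand-in, not the paper's actual architecture: permuting the input rows permutes the output rows identically.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Plain single-head self-attention over the rows (variables) of x."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # row-wise softmax
    return w @ v

# Equivariance property the policy relies on:
#   self_attention(x[perm], ...) == self_attention(x, ...)[perm]
```

Because no positional encoding is attached to the variable axis, relabeling the variables of the causal system relabels the policy's outputs in the same way, which is one reason a single amortized policy can transfer across environments.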
Summary: The authors propose an intervention design method, Causal Amortized Structure Learning, **CAASL**, which uses a transformer to directly predict the next intervention to perform given a simulator of the environment (in which the intervention is to be performed), trained with the Soft Actor-Critic (SAC) algorithm. The rewards are obtained by using the amortized posterior of the adjacency matrix of the causal graph given interventional data via AVICI, a recently proposed likelihood-free approach for obtaining an estimate of the causal graph. The authors demonstrate the amortization performance of CAASL on synthetic data and SERGIO (a single-cell gene expression simulator) and its generalization to distribution shifts. Importantly, CAASL doesn't require inference of the causal graph. Strengths: - The paper tackles the important problem of designing an intervention policy in environments where likelihood-based inference of a causal graph is intractable and it might be difficult to utilize the data likelihood to select interventions. - The use of the pretrained AVICI model as a reward function can potentially be more generally applicable-- - it formalises the reward of performing an intervention in the environment as the increase in the number of correct entries in the adjacency matrix $A$ of the causal graph resulting from the intervention-- and this is computable since we have access to the simulator, and hence $A$. - this reward function is also shown to be kind of optimizing the lower bound on the multi-step expected information gain (EIG) by learning the adjacency matrix $A$ for the policy - The notation and writing of the paper is very clear, making it approachable. Weaknesses: Experiments: - How dense/sparse can $A$ be? - What are some other intervention design approaches, and can they be used as baselines? - Can you provide some insights on why the random baseline is competitive when the intervention type changes on SERGIO?
Writing: - on what kinds of datasets is CAASL applicable-- what are some other datasets where we can access $A$? - can you clearly establish the link between sequential Bayesian experimental design and the multi-step EIG-- where does the related BA bound appear in Bayesian experimental design (such that it doesn't involve a reading of ref.17)? - If the BA bound uses the log-likelihood of the true causal graph under the amortized posterior, how does comparing $A$ and $\hat{A}$ make it still connected to the BA bound? - (minor): can you clarify what is meant by line 59 "As such, CAASL is an intervention design method for performing sample efficient causal structure learning, but is not a new causal structure learning method in itself." Technical Quality: 4 Clarity: 3 Questions for Authors: - just curious, why is it CAASL and not CASL? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors discuss some of the limitations of their work, but I think the limitations with respect to experimental results on changing intervention type, and types of simulation environments where this method is applicable can be clarified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
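The AVICI-based reward praised in this review (the increase in correctly recovered adjacency-matrix entries after an intervention) can be sketched in a few lines. This is a hypothetical stand-in, not the paper's implementation: names are illustrative, and the hard {0,1} estimates would in practice come from thresholding the amortized AVICI posterior.

```python
# Hedged sketch of the adjacency-accuracy reward described in the review:
# reward = gain in correct entries of the estimated causal adjacency matrix
# after incorporating new interventional data. Function names are hypothetical.

def correct_entries(a_hat, a_true):
    """Count entries of the estimated adjacency matrix matching the truth."""
    return sum(
        int(h == t)
        for row_hat, row_true in zip(a_hat, a_true)
        for h, t in zip(row_hat, row_true)
    )

def intervention_reward(a_hat_before, a_hat_after, a_true):
    """Reward for one intervention: increase in correctly recovered entries."""
    return correct_entries(a_hat_after, a_true) - correct_entries(a_hat_before, a_true)

# Toy 2-node example: true graph has a single edge 0 -> 1.
a_true = [[0, 1], [0, 0]]
before = [[0, 0], [0, 0]]   # 3 of 4 entries correct
after = [[0, 1], [0, 0]]    # all 4 entries correct
print(intervention_reward(before, after, a_true))  # 1
```

This reward is computable during training precisely because the simulator exposes the ground-truth $A$, as the review notes.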
Rebuttal 1: Rebuttal: Thanks a lot for your detailed and very positive review of our work. We provide answers and clarifications to your questions below: - "How dense/sparse can $A$ be?" For training the policy, we set the density of graphs to 3 edges on average per node. Amortization can also be achieved across different graph densities by simply sampling during training from a prior over graphs that is denser/sparser. - "What are some other intervention design approaches and can they be used as baselines?": For the synthetic design environment setting, any of the approaches listed in L119 can be potential baselines. However, we compare with diffCBED as it is the most recent and is shown to perform better than other possible baselines. For the GRN environment, we are not aware of any baselines apart from random, as the others require access to the likelihood. - "Why is the random strategy competitive when the intervention type changes in SERGIO?": The input to the policy is the experimental history, which only contains the interventional sample and the corresponding {0,1}-vector interventional target. When the intervention type changes, the semantics of the data generating process also change significantly, which the policy is unaware of since the input to the policy is still the same. Likewise, there are no specific inductive biases encoded in either the training procedure or the policy architecture that might result in good generalization. We will clarify this limitation in the revision. In order to get better results on a different intervention type, we can amortize over different intervention types by training the policy on data generated by different intervention types. However, in this work, we deliberately train on a narrow domain to demonstrate the OOD generalization properties. - "on what kinds of datasets is CAASL applicable-- what are some other datasets where we can access $A$?"
CAASL is applicable to any datasets/settings that have a corresponding simulator. Some examples of causal structure learning simulators and applications are given in line 199. - “can you clearly establish the link between sequential Bayesian experimental design and the multi-step EIG-- where does the related BA bound appear in Bayesian experimental design (such that it doesn't involve a reading of ref.17)?”: The link between sequential Bayesian experimental design and the related BA bound is discussed in Blau et al., Ref. [7]. - "can you clarify what is meant by line 59 "As such, CAASL is an intervention design method for performing sample efficient causal structure learning, but is not a new causal structure learning method in itself." In this sentence, we mean to emphasize that this work focuses on the active learning/intervention design part rather than on how to do causal structure learning given data. - "If the BA bound uses the log-likelihood of the true causal graph under the amortized posterior, how does comparing $A$ and $\hat{A}$ make it still connected to the BA bound?" As mentioned in line 230, although the proposed reward function is not exactly connected to the BA bound, using the log-probability of the true graph in place of the normalized accuracy would result in a BA bound. - "Why CAASL and not CASL": We would like to emphasize both the active learning part and the amortization part, hence the acronym CAASL. We hope that addresses your concerns and questions. We would be glad to answer any further questions you may have. --- Rebuttal Comment 1.1: Title: reply to authors Comment: I thank the authors for clarifying my questions and for planning to expand on the baselines and their explanations in the final draft. I remain positive about accepting the paper.
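For context on the BA bound discussed in this exchange, the standard Barber-Agakov lower bound on the information gained about the graph $A$ from data $D$ can be stated as follows (our paraphrase for the reader's convenience, with $q$ playing the role of the amortized posterior; not quoted from the paper):

```latex
% Barber--Agakov variational lower bound on mutual information:
% for any variational posterior q(A \mid D),
I(A; D) \;=\; H(A) - H(A \mid D)
\;\geq\; H(A) + \mathbb{E}_{p(A)\,p(D \mid A)}\bigl[\log q(A \mid D)\bigr],
% with equality when q(A \mid D) = p(A \mid D).
```

With AVICI as $q(A \mid D)$, rewarding the log-probability of the true graph estimates this bound on the EIG; the paper's reward instead uses entry-wise accuracy of $\hat{A}$ against $A$, which is why the rebuttal describes the connection as inexact.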
Rebuttal 1: Rebuttal: We thank all the reviewers for the detailed feedback regarding our work. We are delighted to find that, in general, the reviewers find that our work tackles an important problem (rMit), is well-written (ydzf, rMit, EZYL), is a novel and interesting combination of existing ideas (o26x, EZYL), and provides clear empirical analysis (ydzf, EZYL). In addition, for the camera-ready, we will incorporate the following changes as suggested by the reviewers: 1. Summarize the contributions at the end of the introduction. 2. Include a schematic figure of the training of CAASL, which shows the network architecture, simulator and the AVICI model used for the reward function. 3. Experiments with respect to generalization when the experiment budget is larger than that used during training. 4. We will expand the limitations section to include the lack of theoretical guarantees wrt OOD generalization. 5. Expanded explanation of the baselines and their limitations. We have answered each reviewer's questions and concerns, addressing the above points in more detail. We would be happy to answer further questions if there are any.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Discrete Modeling via Boundary Conditional Diffusion Processes
Accept (poster)
Summary: The authors propose a discrete diffusion model that operates in the embedding space of vectors that represent discrete tokens. They propose a method that takes into account the fact that whole regions of the space will decode to the same discrete token because during decoding, the closest embedding vector is selected. The method is that at training time, instead of interpolating between $x_0$ and $x_1$, they interpolate from $x_{t_0}$ to $x_1$ where $x_{t_0}$ is the point at which the line between $x_0$ and $x_1$ intersects the 'discrete boundary' of $x_0$ which is the region of embedding space that all decodes to $x_0$. The authors test their method on language tasks as well as CIFAR-10 image modeling. Strengths: The idea of trying to exploit the fact that the embedding space has a very specific structure consisting of volumes that all decode to the same discrete token is interesting and novel. The experimental results look quite promising with good performance compared to other discrete diffusion approaches on machine translation and text summarization. The authors are working on a significant problem as finding diffusion methods that can perform on par with autoregressive models for discrete modeling would be very impactful. Weaknesses: The paper's main weakness is lack of clarity and precision in the description of the method. I really struggle to understand what the authors are doing exactly and how it is justified. - It is not clear to me that your training objective and sampling regime are justified. The flow matching paradigm is to construct a $p_t(x_t | x_0)$ probability path and then find a $u_t(x_t | x_0)$ vector field that generates this probability path through the transform defined by ODE integrating along the $u_t(x_t | x_0)$ vector field. 
The final step is to say that a vector field that generates the unconditional probability path $p_t(x_t)$ is equal to $\mathbb{E}\_{p(x_0 | x_t)} [ u_t(x_t | x_0) ]$ and this can be learned tractably with an objective of $\mathbb{E}\_{p(x_0) p(x_t | x_0)} [ || v_t^\theta(x_t) - u_t(x_t | x_0) ||^2 ]$. However, I don't see how your method fits in this framework. This is for two reasons: 1) at no point do you define the conditional vector field $\tilde{u}\_t(\tilde{x}\_t | x\_0)$ that generates your rescaled probability path $\tilde{p}\_t(\tilde{x}\_t | x\_0)$. How are you able to generate along these probability paths without defining $\tilde{u}\_t(\tilde{x}\_t | x\_0)$ ? 2) You sample using $u\_t(x\_t | x\_0)$ for the original $p\_t(x\_t | x\_0)$ but with the $\hat{x}\_0$ prediction from the neural network. This is valid in the original flow matching case because $u\_t(x\_t | x\_0)$ is linear and so the $\mathbb{E}\_{p(x\_0) p(x\_t | x\_0)} [ || v\_t^\theta(x\_t) - u\_t(x\_t | x\_0) ||^2 ]$ objective can be re-arranged into $x\_0$ prediction $\mathbb{E}\_{p(x\_0) p(x\_t | x\_0)} [ || x\_\theta(x\_t , t) - x\_0 ||^2 ]$. However we don't necessarily know if $\tilde{u}\_t(\tilde{x}\_t | x\_0)$ that generates $\tilde{p}\_t(\tilde{x}\_t | x\_0)$ is linear so we can't be sure it is valid to re-arrange to $x\_0$ prediction. - I think after the method is explained in section 3, the authors should have text that revisits the probability contour intuition they provide during the introduction. I think this is a nice mental model to understand the method but it is very unclear how exactly the method presented in section 3 gives you the desired probability contours that are promised in the introduction. 
For example, if a contour is very long in embedding space, then the far-away points on the contour will certainly have low probability under $\tilde{p}\_t(\tilde{x}\_t | x\_0)$, so the probability is not constant along the discrete boundary, and the discrete boundary therefore does not line up with the probability contour. These kinds of doubts would be alleviated with a clearer presentation linking back to the original probability contour view and some more precise statements. - The discussion relating to the confidence factor is very unclear. Firstly, I'm unsure why it is even called a confidence factor; we can be fully confident that our data is discrete so why is there uncertainty about this? It is also concerning that the method can oscillate and mode collapse when r=1, which is your full method. This sounds like a major flaw of the approach if it cannot work at all when your new method is used in isolation. This warrants more discussion regarding this failure case. Furthermore L195 should be explained more as it is very unclear what "fixing the initial path" means and it sounds very important if it is required for your method to work. Regarding the experiments: - It seems unfair to allow your method to be re-ranked in Table 1 if you don't also re-rank the transformer baseline - In Table 3, it is hard to get a good sense of how your method compares to other discrete diffusion approaches. Since they all use different embedding strategies it seems that it is only a good comparison to compare methods within the same embedding strategy. In this case, you are really only comparing to bit diffusion but your reproduction seems to perform much worse than the originally stated results in the bit diffusion paper. It is therefore difficult to place your method with confidence amongst the other discrete diffusion methods. My score is given on the basis of the very unclear presentation meaning I cannot understand the method the authors are proposing.
If this can be cleared up in the rebuttal period, I am happy to raise my score. Technical Quality: 2 Clarity: 2 Questions for Authors: Here I have listed some more minor questions for the authors that can help improve the exposition. My main concerns are given in the previous discussion. - How does your formulation handle the fact that for some $x\_0$ and $\epsilon$ samples, there will never be a boundary crossing because $x\_0$ may be on the outside of the space or $\epsilon$ is sampled within $x\_0$'s discrete boundary. - On L187 (24) is already deterministic because it's an ODE so I don't understand why you then propose a deterministic alternative. - In Table 2 it is unclear what ablation you are actually doing, what does it mean to only rescale the forward process versus rescaling the forward and backward? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: I don't believe the authors adequately describe the limitations of their method when r is near 1; they say that it can be unstable but provide no further details or explanation regarding this failure case. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
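The reviewer's linearity point can be made concrete with a standard sketch for Gaussian paths (our summary of textbook flow matching, not taken from the paper under review):

```latex
% For a Gaussian path x_t = \alpha_t x_0 + \sigma_t \epsilon, substituting
% \epsilon = (x_t - \alpha_t x_0)/\sigma_t into the conditional field gives
u_t(x_t \mid x_0)
  = \dot{\alpha}_t x_0 + \dot{\sigma}_t \epsilon
  = \frac{\dot{\sigma}_t}{\sigma_t}\, x_t
    + \Bigl(\dot{\alpha}_t - \frac{\dot{\sigma}_t \alpha_t}{\sigma_t}\Bigr) x_0,
% which is affine in x_0 for fixed (x_t, t). Hence
\mathbb{E}\bigl[\, u_t(x_t \mid x_0) \,\big|\, x_t \,\bigr]
  = u_t\bigl(x_t \mid \mathbb{E}[\, x_0 \mid x_t \,]\bigr),
% so an x_0-prediction loss recovers the marginal vector field. For a
% nonlinear \tilde{u}_t(\tilde{x}_t \mid x_0), this exchange of the
% expectation with the field need not hold.
```

This is exactly the gap the review identifies: $x_0$-prediction is equivalent to vector-field regression only when the conditional field is affine in $x_0$.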
Rebuttal 1: Rebuttal: We sincerely appreciate your meticulous review and the valuable comments. Please kindly find our response below. **Q1: The rescaled vector field and probability path** *Due to the character limit, this is only an outline. Please refer to comments for more details.* 1. Our neural network predicts $\mathbf{x}_0$, not $\tilde{u}_t(\tilde{\mathbf{x}}_t|\mathbf{x}_0)$. 2. $\tilde{u}_t(\tilde{\mathbf{x}}_t|\mathbf{x}_0)$ is tractable, but we did not derive it because it is not used. 3. We generate the probability path with the deterministic reverse diffusion process (Equation 25). 4. The training objective $||\mathbf{x}_0-\mathbf{x}_\theta||$ is valid. Derivations in Comments. **Q2: Method and the probability contour** *Due to the character limit, this is only an outline. Please refer to comments for more details.* 1. We have demonstrated the rescaled probability contours of $\tilde{p}_t(\tilde{\mathbf{x}}_t|\mathbf{x}_0)$ in Figure 2, where sample points are exactly calculated by our proposed method (Equation 22). 2. $\tilde{p}_t(\mathbf{x}|\mathbf{x}_0)$ is a series of functions, where the consistency to the boundary contour is inversely proportional to the subscript $t$. **Q3: Confidence factor** *Due to the character limit, this is only an outline. Please refer to comments for more details.* 1. The confidence factor is named after some thoughts of sampling, such as the confidence interval. Our data is discrete and the uncertainty comes from the learned diffusion model. 2. The mode collapse when $r=1$ comes from two parts: One is the uncertainty of the learned diffusion model as described above. The other involves a traditional problem of diffusion processes. 3. Fixing the initial path means the Trajectory Alteration (Line 8 in Algorithm 2) is optional in practice. This is a negligible detail of our model. **Q4: Re-ranking** This is a fair comparison because beam searching the generated results of the transformer is re-ranking with the transformer.
Re-ranking is a traditional strategy widely used in Non-autoregressive translation[7,8]. We use the generated results of our method re-ranked by the transformer to demonstrate their upper bound. Since the generation of diffusion LMs is highly random, their own probability scores often cannot reflect the upper bound of their performance. Re-ranking the generated results with an auto-regressive model can better demonstrate the potential capability of the diffusion model. Besides, re-ranking the transformer baseline just yields what it already generates, because **beam search is exactly re-ranking the generated contents with the transformer's own probabilities.** **Q5: Table 3** 1. **We believe it is a reasonable comparison because we strictly follow the setting of BitDiffusion[9] as in Table 1 of their paper.** We want to clarify that the embedding strategy is only valid for continuous diffusion models; there is no embedding stage for discrete diffusion models. We compare the continuous diffusion models and the discrete diffusion models under the same level of continuous information, Continuous Pixels, Discrete Ordinal Pixels, and Categorical Pixels, where the later categories give models less continuous information. 2. **We have clarified in Lines 285-287 that since they only provide the model code without a training script with detailed hyperparameters or a checkpoint, we have to reproduce with exactly the same configuration based on their code and paper.** We have made our best effort to reproduce the results and we promise to release our code and reproduction. Besides, we achieve the best reproduction result in the open source community. We are confident in the reliability of our experimental results and hope to address the reviewer's concerns. 3.
**Even ignoring the above issues, our method still shows effectiveness.** Considering the gap introduced by the continuous information, where the original BitDiffusion (FID 3.48) cannot beat DDPM (FID 3.17) under Gaussian sampling, our method (FID 3.86) achieves an improvement over DDIM (FID 4.04), which are both deterministic diffusion processes. This means that our approach can achieve better results with less continuous information. **Q6: Samples in the boundary** 1. **It is possible that a random noise $\boldsymbol{\epsilon}$ is sampled within $\mathbf{x}_0$'s discrete boundary.** However, the probability of this happening in a high-dimensional representation space is very low. According to our statistics during the training phase, this probability is less than 0.1%. 2. **Our solution is demonstrated in the pseudo code (Line 639, Appendix F), where we mask out this situation when $f(\boldsymbol{\epsilon},\mathcal{I})>f(\boldsymbol{\epsilon},\mathcal{J})$.** This means that random noise $\boldsymbol{\epsilon}$ sampled within $\mathbf{x}_0$'s discrete boundary will not be rescaled and its trajectory is the same as in the original DDPM. **Q7: Line 187** Similar to 3.2 in Q1, as demonstrated in Lines 186-189, we provide an alternative for the ODE decoding, which is a deterministic reverse process. **The word 'deterministic' is used to reflect the difference between our approach and the traditional Gaussian process in DDPM**, because both of them model the reverse process. **Q8: Ablation in Table 2** 1. **The first three lines in Table 2 reveal where the performance improvement of our method mainly comes from.** 'Only rescaling the forward process' means we just train the diffusion model with the rescaled trajectory and reverse the trajectory as in DDPM. 'Rescaling the forward and backward' means we use the rescaled trajectory for both training and inference.
The results in Table 2 show that training with the rescaled trajectories makes a major contribution to the final performance of our method. 2. **The last line in Table 2 reveals that our method is robust to different trajectory types; the optimal transport trajectory used in Flow Matching can sometimes achieve better results.** --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for the detailed rebuttal. I am happy with the responses to my questions regarding the probability contour, confidence factor, samples on the boundary, the deterministic sampler and Table 2. However, I am still confused about your overall method. I appreciate the derivation of $\tilde{u}_t(\tilde{x}_t | x_0)$; however, this still leaves me with questions about the loss you use. You justify the $x_0$ prediction by saying that we are trying to match $\tilde{u}_t(\tilde{x}_t | x_0)$ with $\tilde{u}_t(\tilde{x}_t | x_0^\theta)$ which, yes, would be equal if you set $x_0 = x_0^\theta$. However, this misunderstands what the loss is actually doing. When we do flow matching, we use the L2 loss because we would like to have $\tilde{u}_t^\theta(\tilde{x}_t) = \mathbb{E}[ \tilde{u}_t(\tilde{x}_t | x_0) | \tilde{x}_t] $ which is the solution to min $\mathbb{E} [ || \tilde{u}_t(\tilde{x}_t | x_0) - \tilde{u}_t^\theta(\tilde{x}_t) ||^2 ]$. If you then solve min $\mathbb{E} [ || x_0 - x_0^\theta (\tilde{x}_t) ||^2]$ you will obtain $\mathbb{E} [ x_0 | \tilde{x}_t ]$. But we don't necessarily have $ \mathbb{E}[ \tilde{u}_t(\tilde{x}_t | x_0) | \tilde{x}_t] = \tilde{u}_t(\tilde{x}_t | \mathbb{E} [ x_0 | \tilde{x}_t ]) $ which would be required for your loss to be justified. This is ok in standard flow matching since $u$ is linear but it is not clear in your case. Regarding Table 1, am I to understand you used beam-search for the transformer baseline and this is why you say this is equivalent to re-ranking your method?
Why didn't you just use normal sampling for the transformer baseline and normal sampling for your method? I feel that would be a much clearer narrative and would not imply your method requires re-ranking to work well. For Table 3 it is good that you implemented Bit Diffusion and got the best open source results; however, it doesn't really help make your narrative clear since the results are worse than those reported in the original bit diffusion paper. Why didn't you just use standard flow matching on the same discrete embedding space as your method but just without your boundary method? That would have been the closest and simplest baseline and would have made the precise benefits of your method clear. --- Rebuttal 2: Title: Detailed answer for Q1: The rescaled vector field and probability path (Part 1/2) Comment: **Q1: The rescaled vector field and probability path** We do not define the vector field $\tilde{u}_t(\tilde{\mathbf{x}}_t|x_0)$ in our framework, because our neural network predicts $\mathbf{x}_0$ and we generate the probability path with the deterministic reverse diffusion process. 1. **Why do we predict $\mathbf{x}_0$?** $G(\mathbf{x}_0,\boldsymbol{\epsilon})$ is the key to our method during both the training and inference stages, so we are required to have a low-error estimate of $\mathbf{x}_0$ (Lines 180-182). Predicting $\tilde{u}_t(\tilde{\mathbf{x}}_t|x_0)$ or $\boldsymbol{\epsilon}$ will amplify the errors in the neural network's output. 2.
**How to define the vector field $\tilde{u}_t(\tilde{\mathbf{x}}_t|\mathbf{x}_0)$ if we want?** > Preliminary: $u_t(\mathbf{x}_t|\mathbf{x}_0) = \frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t}$ (Equation 11-13 in Flow Matching [1]) > > In our framework: $\tilde{\mathbf{x}}_t = \mathbf{u}(\mathbf{x}_0,\mathcal{T}(t,G(\mathbf{x}_0,\boldsymbol{\epsilon})))\mathbf{x}_0+\mathbf{v}(\mathbf{x}_0,\mathcal{T}(t,G(\mathbf{x}_0,\boldsymbol{\epsilon})))\boldsymbol{\epsilon}$ (Equation 22) > > Therefore: $\tilde{u}_t(\tilde{\mathbf{x}}_t|\mathbf{x}_0) = \frac{\mathrm{d}\tilde{\mathbf{x}}_t}{\mathrm{d}t} = \left[\mathbf{u}'(\mathbf{x}_0,\mathcal{T}(t,G(\mathbf{x}_0,\boldsymbol{\epsilon})))\mathbf{x}_0+ \mathbf{v}'(\mathbf{x}_0,\mathcal{T}(t,G(\mathbf{x}_0,\boldsymbol{\epsilon})))\boldsymbol{\epsilon}\right]\frac{\mathrm{d}\mathcal{T}(t,G(\mathbf{x}_0,\boldsymbol{\epsilon}))}{\mathrm{d}t}$ > > Given: $\mathcal{T}(t,t_0) = r\times t_0 + t\times (T-r\times t_0)/T$ (Equation 19), we have $\frac{\mathrm{d}\mathcal{T}(t,G(\mathbf{x}_0,\boldsymbol{\epsilon}))}{\mathrm{d}t}=\frac{T-r\times G(\mathbf{x}_0,\boldsymbol{\epsilon})}{T}$ > > Let $\tau$ denote $\mathcal{T}(t,G(\mathbf{x}_0,\boldsymbol{\epsilon}))$ > > Hence: $\tilde{u}_t(\tilde{\mathbf{x}}_t|\mathbf{x}_0) = \frac{\mathrm{d}\tilde{\mathbf{x}}_t}{\mathrm{d}t} = \left[\mathbf{u}'(\mathbf{x}_0,\tau)\mathbf{x}_0+ \mathbf{v}'(\mathbf{x}_0,\tau)\boldsymbol{\epsilon}\right]\frac{T-r\times G(\mathbf{x}_0,\boldsymbol{\epsilon})}{T}$, which is actually tractable. 3. **How to generate the probability paths?** - 3.1 **One direct solution is Equation 24, which is derived from Theorem 3 in Flow Matching[1]** > Let $\tilde{p}_t(\mathbf{x}|\mathbf{x}_0)$ be a probability path; given Equation 22, we have $\boldsymbol{\epsilon}=\frac{\mathbf{x}-\mathbf{u}(\mathbf{x}_0,\tau)\mathbf{x}_0}{\mathbf{v}(\mathbf{x}_0,\tau)}$.
> > Replacing $\boldsymbol{\epsilon}$ in $\tilde{u}_t(\tilde{\mathbf{x}}_t|\mathbf{x}_0)$ with $\mathbf{x}$, we have $\tilde{u}_t(\mathbf{x}|\mathbf{x}_0)= \left[ \frac{\mathbf{v}'(\mathbf{x}_0,\tau)}{\mathbf{v}(\mathbf{x}_0,\tau)}(\mathbf{x}-\mathbf{u}(\mathbf{x}_0,\tau)\mathbf{x}_0)+\mathbf{u}'(\mathbf{x}_0,\tau)\mathbf{x}_0\right]\frac{T-r\times G(\mathbf{x}_0,\boldsymbol{\epsilon})}{T}$. > > Therefore, we generate the probability path with $\tilde{u}_t(\mathbf{x}|\mathbf{x}_0)$. However, **this may be inefficient in practical use because we have to solve the equation $\tau=\mathcal{T}(t,G(\mathbf{x}_0,\frac{\mathbf{x}-\mathbf{u}(\mathbf{x}_0,\tau)\mathbf{x}_0}{\mathbf{v}(\mathbf{x}_0,\tau)}))$ to get the $\tau$ with respect to the change of $\mathbf{x}$ in real time.** This is tractable for special cases such as original Flow Matching (Equation 42 in Appendix E), but it will be much more complex for Diffusion trajectories (Equation 43 in Appendix E). Since our approach is a general framework for all continuous diffusion processes, including both Diffusion Model and Flow Matching, **we actually use an alternative approach similar to the DDIM[2]**. *It's worth noting that we just demonstrate the vector field function $u_t(\mathbf{x}|\mathbf{x}_0)$ in Equation 24 to show its complexity. We do not provide the derivation of $\tilde{u}_t(\mathbf{x}|\mathbf{x}_0)$ because we do not want to distract readers from our actual generation method by spending too much time introducing and deriving this function that is totally not used. 
However, as pointed out by the reviewer, we will add the above derivation to the appendix to ensure the integrity of our framework.* --- Rebuttal 3: Title: Detailed answer for Q1: The rescaled vector field and probability path (Part 2/2) Comment: - 3.2 **Our alternative solution is Equation 25 and Algorithm 2, which works like the DDIM[2].** In short, our solution can be understood as a deterministic reverse diffusion process or an ODE with discrete time steps. We use a state transfer probability (Equation 25) $\tilde{p}([\tilde{\mathbf{x}}_ {t_ 1};\tau_ 1]|[\tilde{\mathbf{x}}_ {t_ 2};\tau_2],\mathbf{x}_ 0)$ to extend the traditional reverse process $p(\mathbf{x}_ {t_ 1}|\mathbf{x}_ {t_ 2},\mathbf{x}_ 0)$, where $1\leq t_1 < t_2 \leq T$. As in diffusion processes[3], if we set the timestep interval to 1, the probability path is obtained with $p_ t(\mathbf{x}|\mathbf{x}_ 0) = p(\mathbf{x}_ T|\mathbf{x}_ 0) \prod\limits_{s=t+1}^{T} p(\mathbf{x}_ {s-1}|\mathbf{x}_ {s},\mathbf{x}_ 0)$. In our framework, we get $\tilde{p}_ t([\tilde{\mathbf{x}};\tau]|\mathbf{x}_ 0)=p([\tilde{\mathbf{x}}_ T;T]|\mathbf{x}_ 0)\prod\tilde{p}([\tilde{\mathbf{x}}_ {t_ i};\tau_ i]|[\tilde{\mathbf{x}}_ {t_ {i+1}};\tau_ {i+1}],\mathbf{x}_ 0)$. Since the state transfer probability is a Dirac delta function, there is no randomness in the reverse process, where the $[\tilde{\mathbf{x}}_t;\tau]$ pair can be iteratively generated deterministically (Algorithm 2). 4. **How to derive the training objective $\min ||\mathbf{x}_ 0-\mathbf{x}_ \theta||$?** - 4.1 **From the perspective of the deterministic reverse process** We utilize the deterministic reverse diffusion process to generate data (Equation 25). Similar to DDPM[3], the learning objective for the unrescaled deterministic reverse diffusion process is derived in Appendix C.3, Equation 38, which can still be simplified to $||\mathbf{x}_ 0-\mathbf{x}_ \theta||$.
For our rescaled reverse process $\tilde{p}([\tilde{\mathbf{x}}_ {t_\Delta};\tau_ \Delta]|[\tilde{\mathbf{x}}_ {t};\tau],\mathbf{x}_ 0)$, where both $\tau$ and $\tilde{\mathbf{x}}_ t$ are inputs, the objective is an equation set: $$ \left[\begin{aligned} \mathbf{u}_ {\tau_ \Delta}\mathbf{x}_ 0 + \mathbf{v}_ {\tau_ \Delta}\hat{\boldsymbol{\epsilon}}&= \mathbf{u}_ {\tau_ \Delta} \mathbf{x}_ \theta + \mathbf{v}_ {\tau_ \Delta}\hat{\boldsymbol{\epsilon}}\\\\ \mathcal{T}(t-\Delta t,G(\mathbf{x}_ 0,\hat{\boldsymbol{\epsilon}})) &= \mathcal{T}(t-\Delta t,G(\mathbf{x}_ \theta,\hat{\boldsymbol{\epsilon}})) \end{aligned}\right] \Rightarrow \left[\begin{aligned} \mathbf{x}_ 0 &=\mathbf{x}_ \theta \\\\ G(\mathbf{x}_ 0,\hat{\boldsymbol{\epsilon}}) &= G(\mathbf{x}_ \theta,\hat{\boldsymbol{\epsilon}}) \end{aligned}\right], $$ where $\hat{\boldsymbol{\epsilon}}=\Psi^{-1}([\tilde{\mathbf{x}}_ t,\tau])$ is a constant. Besides, $\tau_ \Delta$ is a constant for the above equation. The unique solution to this equation set is $\mathbf{x}_\theta=\mathbf{x}_0$. * 4.2 **From the perspective of re-arranging** Given $\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0) = \left[\mathbf{u}'(\mathbf{x}_ 0,\mathcal{T}(t,G(\mathbf{x}_ 0,\boldsymbol{\epsilon})))\mathbf{x}_ 0+ \mathbf{v}'(\mathbf{x}_ 0,\mathcal{T}(t,G(\mathbf{x}_ 0,\boldsymbol{\epsilon})))\boldsymbol{\epsilon}\right]\frac{T-r\times G(\mathbf{x}_ 0,\boldsymbol{\epsilon})}{T}$, we can define $\tilde{v}_ \theta(\tilde{\mathbf{x}}_ t)=\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ \theta)$. To get $\min ||\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0) - \tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ \theta)||$, we can directly calculate the partial derivative.
For any dimension $i$ of $\mathbf{x}_\theta$, we have: $$ \frac{\partial ||\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0) - \tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ \theta) ||}{\partial \mathbf{x}^i_ \theta} = -\frac{\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0) - \tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ \theta)}{||\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0) - \tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ \theta) || } \frac{\partial \tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ \theta) }{\partial \mathbf{x}^i_ \theta} =0 $$ We can drop $\frac{\partial \tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ \theta) }{\partial \mathbf{x}^i_ \theta}$, because it cannot provide a valid value of $\mathbf{x}_ \theta$ without a term involving $\mathbf{x}_ 0$. Solving the equation $\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0) - \tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ \theta) = 0$ is difficult, but it is easy to prove that $\mathbf{x}_ \theta = \mathbf{x}_ 0$ is a solution. Since $\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0)$ is deterministic, it is an injective function. This means that the same input will not produce different outputs. Therefore, $\mathbf{x}_ \theta = \mathbf{x}_ 0$ always attains the minimum value of $||\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0) - \tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ \theta) ||$ and we simplify the objective to $\min ||\mathbf{x}_ 0 - \mathbf{x}_ \theta||$ for convenience and training stability. --- Rebuttal 4: Title: Detailed answer for Q2: Method and the probability contour Comment: **Q2: Method and the probability contour** 1.
**We have demonstrated the rescaled probability contours of $\tilde{p}_t(\tilde{\mathbf{x}}_t|\mathbf{x}_0)$ in Figure 2, where sample points are exactly calculated by our proposed method (Equation 22).** As illustrated in Figure 2, the probability contours calculated by our method can be easily adapted to different discrete boundaries and noising trajectories, which is in line with our expectations in Figure 1 of the Introduction Section. *Thank you for this suggestion; we will add a quick revision at the end of the Method Section and link our formula to Figure 2.* 2. **$\tilde{p}_t(\mathbf{x}|\mathbf{x}_0)$ is a series of functions, where the consistency to the boundary contour is inversely proportional to the subscript $t$.** As illustrated in Figure 2 and Equations 21 and 22, $\tilde{p}_0(\mathbf{x}|\mathbf{x}_0)$ is exactly the boundary contour while $\tilde{p}_T(\mathbf{x}|\mathbf{x}_0)$ is a Gaussian distribution. When a boundary contour is very long in embedding space, far-away points will gradually have lower probability density under $\tilde{p}_t(\mathbf{x}|\mathbf{x}_0)$ as $t$ increases. This means that, **during the inference stage, the probability density will gradually become consistent with the contour when approaching the boundary, which is in line with Lines 46-51 and Figure 1B.** Besides, we will not face the extreme case where the boundary contour is an infinitely long straight line in practice. Because the embedding space is a finite space and the values of the embedding points are normalized into a finite range, when we set the diffusion space a bit larger than the embedding space, there is always a series of probability functions $\tilde{p}_t(\mathbf{x}|\mathbf{x}_0)$ from Gaussian distributions to the boundaries. --- Rebuttal 5: Title: Detailed answer for Q3: Confidence factor Comment: **Q3: Confidence factor** 1. **The confidence factor is named after some thoughts of sampling, such as the confidence interval.
Our data is discrete and the uncertainty comes from the learned diffusion model.** Consider a simplified situation with only two tokens 'A' and 'B', i.e., binary classification. When the confidence factor $r=1$, points $\mathbf{x}$ on the boundary of 'A' strictly follow $p(\mathbf{x}\in A)=p(\mathbf{x} \in B) = 0.5$. Since the learned diffusion model is expected to guide a random noise to this boundary, any subtle perturbation to the output of our learned diffusion model may flip the attribution of $\mathbf{x}$. If we decrease the factor $r$ so that points on the boundary of the token 'A' have higher probability density, e.g., $p(\mathbf{x}\in A)=0.7>p(\mathbf{x}\in B)$, we will be more confident that the points generated by the learned diffusion model belong to token 'A'. (In this situation, $p(\mathbf{x}\in A)\geq 0.7$ is the discrete area of A, $p(\mathbf{x}\in B)\geq 0.7$ is the discrete area of B, and $0.3 < p(\mathbf{x}\in A)< 0.7$ is the unattributed area.) 2. **The mode collapse when $r=1$ comes from two parts: one is the uncertainty of the learned diffusion model as described above. The other involves a traditional problem of diffusion processes.** - 2.1 **From the perspective of uncertainty** For tasks with strong conditions, we are confident in the model's prediction since the task is easy. Therefore, $r=1$ works well (Lines 227-229 and Table 1) as the strong conditions greatly reduce the uncertainty of the learned model during inference. When the condition is weak or there is no condition, uncertainty increases rapidly. This means that the sampled noisy training data constitutes a more difficult task with $r=1$. When the task difficulty exceeds the model's capability, the collapse occurs.
- 2.2 **From the perspective of diffusion processes** This is also a problem similar to the well-studied one in Flow Matching with Dynamic Optimal Transport [4,5,6], i.e., the path from the noise to the target is too long and learning this path is hard. For example, suppose there are two unconditional targets 'A' and 'B'; the sampled noisy data $\tilde{\mathbf{x}}_A$ of 'A' when $r=1$ may have a shorter path to 'B' than to 'A'. Likewise, $\tilde{\mathbf{x}}_B$ can be closer to 'A'. The diffusion models are required to map $\tilde{\mathbf{x}}_A$ to 'A' and $\tilde{\mathbf{x}}_B$ to 'B', but it is easier to learn the mapping of $\tilde{\mathbf{x}}_A$ to 'B'. **This is a general problem for all diffusion models, not just ours.** We believe the stability and performance of our method can be further improved with Dynamic Optimal Transport, but we want a fair comparison with our baselines to demonstrate our effectiveness. Therefore, we just use a smaller $r$ to stabilize the training process and use the sub-optimal results to compare with the baselines. Besides, the failure case is simply an anomalous loss value during training, e.g., NaN, after which the model is unable to generate content. 3. **Fixing the initial path means the Trajectory Alteration (Line 8 in Algorithm 2) is optional in practice. This is a negligible detail of our model.** - 3.1 **From the perspective of uncertainty** Line 8 in Algorithm 2 is to update the trajectory. If the learned diffusion model predicts a different $\mathbf{x}_0$, we have to update the original noise because currently we are in a different trajectory. If the learned model keeps changing the predicted target, which means it is uncertain about where to go, updating the trajectory corresponding to an unreliable target may not be beneficial for the denoising process. If the learned model rarely changes its prediction, it means we are on the right path and no updates are needed.
Therefore, it would be a viable option to discard this step (Line 8 in Algorithm 2) to make decoding faster. - 3.2 **From the perspective of experiments** As illustrated in Table 3, if we train with the rescaled forward process and decode with the unrescaled trajectory, there will be only a slight performance degradation. The performance improvement of our model mainly comes from the training process, and **some small changes in the decoding process may not have much impact.** In practical applications, we just discard the trajectory update (Line 8 in Algorithm 2) for simplicity in our experiments and find almost no performance degradation. --- Rebuttal 6: Title: References in Rebuttal Comment: [1] Flow Matching for Generative Modeling [2] Denoising Diffusion Implicit Models [3] Denoising Diffusion Probabilistic Models [4] Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow [5] Rectified Flow: A Marginal Preserving Approach to Optimal Transport [6] Improving and Generalizing Flow-Based Generative Models with Minibatch Optimal Transport [7] Non-Autoregressive Neural Machine Translation [8] Understanding Knowledge Distillation in Non-Autoregressive Machine Translation [9] Analog Bits: Generating Discrete Data Using Diffusion Models with Self-Conditioning --- Rebuttal 7: Title: Response to Reviewer Comment: We sincerely appreciate your response, and we are delighted to have addressed some of your concerns. We hope to better align our contributions with your understanding. **Q1: loss function** $||\mathbf{x}_ 0 - \mathbf{x}_ \theta||$ is a simplified objective where we drop the coefficient (Equation 38 in Appendix C.3).
Rewriting the expectation $\mathbb{E}_{\tilde{\mathbf{x}}_t}[\tilde{u}_t(\tilde{\mathbf{x}}_t|\mathbf{x}_0)]$ in sum form, we have: $$ \underbrace{\sum p(\tilde{\mathbf{x}}_ t) \tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0)}_ {\mathbb{E}_ {\tilde{\mathbf{x}}_ t}[\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0)]} = \sum p(\tilde{\mathbf{x}}_ t) \left[\mathbf{u}'\mathbf{x}_ 0+ \mathbf{v}'\boldsymbol{\epsilon}\right]\underbrace{\frac{T-r\times G(\mathbf{x}_ 0,\boldsymbol{\epsilon})}{T}}_ {0<\text{coefficient}<1} \leq \sum p(\tilde{\mathbf{x}}_ t) \left[\mathbf{u}'\mathbf{x}_ 0+ \mathbf{v}'\boldsymbol{\epsilon}\right] = \underbrace{\left[\mathbf{u}' \sum p(\tilde{\mathbf{x}}_ t) \mathbf{x}_ 0 + \mathbf{v}'\boldsymbol{\epsilon}\right]}_ {\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbb{E}_ {\tilde{\mathbf{x}}_ t}[\mathbf{x}_ 0])}, $$ where $\mathbf{x}_ 0$ is a function of $\tilde{\mathbf{x}}_ t$ and $\frac{T-r\times G(\mathbf{x}_ 0,\boldsymbol{\epsilon})}{T}$ is a dynamic coefficient ranging from $0$ to $1$. Our current objective minimizes an upper bound of $\min \mathbb{E}||\tilde{u}_t - \tilde{u}^\theta_t||$, which is also effective. As illustrated in Equation 38 of Appendix C.3, we simplify the training objective and drop the coefficient during training, similar to Equation 14 in DDPM. This coefficient dynamically assigns weights to different samples based on the noise and $\mathbf{x}_0$. Therefore, if we want $\mathbb{E}_ {\tilde{\mathbf{x}}_ t}[\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0)]=\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbb{E}_ {\tilde{\mathbf{x}}_ t}[\mathbf{x}_ 0])$ to hold, we need to calculate the dynamic coefficient during training, which may be time-consuming. **Q2: re-ranking** The transformer generates with a beam size of 5. We do not use sampling because 1. **We strictly follow our baselines, all of which use beam-search generation.** 2.
**Diffusion LMs are not stable enough to consistently achieve good results with simple sampling, unlike the transformer**, as illustrated in Appendix I and our baselines. They have the potential to generate high-quality contents, but random initialization will have a great impact on the generated results. Transformers do not face this problem of randomness. Take IWSLT14 as an example.

| | beam search | sampling |
| --- | --- | --- |
| transformer | 34.31 | 34.05 |
| Ours Rerank | 35.02 | 33.30 |

When we convert the beam search to sampling, there is a large drop for the reranked diffusion model. As illustrated in Table 1, we do not intend to claim that our method surpasses transformers, but rather to show that our method **has the potential to produce results comparable to transformers** in addition to outperforming existing diffusion models. **Q3: Table 3** We want to clarify that **the closest and simplest baseline is DDIM, not Flow Matching.** Our method is a deterministic diffusion process, or an ODE with discrete time steps. We use the ODE framework to model the deterministic forward process (because the DDPM and DDIM frameworks cannot do this), so that our method is theoretically compatible with both the diffusion process and Flow Matching. However, our method currently requires predicting $x_0$ in application and thus **reverses the diffusion process step by step**, which makes it a deterministic diffusion model, not a standard Flow Matching model. Therefore, the closest and simplest baseline is DDIM, not Flow Matching. (The DDIM sampling of BitDiffusion is 11.37, slightly worse than Gaussian sampling.) If we want to apply Flow Matching to the same discrete embedding space, there are several problems: 1. If we predict $x_0$, the only difference from BitDiffusion is the optimal-transport trajectory. As in the ablation study (Table 2) for language, this sometimes improves performance, but the effect is not large.
Due to the high training cost, we currently only have results at 200K steps, where changing the trajectory to optimal transport does not make a significant difference.

| | FID |
| --- | --- |
| BitDiffusion repro | 22.12 |
| BitDiffusion Flow | 21.05 |
| Ours | 8.17 |
| Ours Flow | 8.08 |

2. If we predict $\tilde{u}$, our method is currently not adaptable to this objective. As demonstrated in Q1-3.1, we have to solve the equation $\tau=\mathcal{T}(t,G(\mathbf{x}_0,\frac{\mathbf{x}-\mathbf{u}(\mathbf{x}_0,\tau)\mathbf{x}_0}{\mathbf{v}(\mathbf{x}_0,\tau)}))$ to get $\tau$ with respect to the change of $\mathbf{x}$ in real time, which is currently inefficient to apply. --- Rebuttal 8: Title: Response to Reviewer Comment: We would like to add that, since solving the equation $\tau=\mathcal{T}(t,G(\mathbf{x}_0,\frac{\mathbf{x}-\mathbf{u}(\mathbf{x}_0,\tau)\mathbf{x}_0}{\mathbf{v}(\mathbf{x}_0,\tau)}))$ for $\tau$ with respect to the change of $\mathbf{x}$ in real time is difficult, we use Equation 25 as an alternative. Equation 25 discretizes $t$ into finitely many steps and keeps track of the previous $\tau$ to make it tractable. The discrete time steps therefore make our model a deterministic diffusion process rather than ODE-based flow matching with infinitely many continuous timesteps. In addition, if the reviewer still wants to know the performance of Flow Matching on binary coding (although our method is not closely related to Flow Matching and is currently not efficient under the Neural ODE framework in application), we are training the model with the TorchCFM framework and will provide the results before the end of the discussion period. --- Rebuttal Comment 8.1: Title: Response to Authors Comment: Thank you for the continued and detailed engagement with my questions. I appreciate this new analysis of the conditional vector field.
As you have shown, $\tilde{u}_t(\tilde{x}_t | \mathbb{E}[ x_0 | \tilde{x}_t])$ (the quantity you use during sampling) is an upper bound on $\mathbb{E} [ \tilde{u}_t (\tilde{x}_t | x_0) | \tilde{x}_t]$ (the quantity that you actually need to be using to be performing principled generative modelling). This is quite concerning since even if you train the network perfectly, you are not targeting the required quantity. I think this analysis should be included in the paper and it should be clearly stated that this departs from standard generative modelling objectives. You should also add discussion about why you still expect this to work; it is entirely unclear right now what your framework is doing. I understand better now your experimental contributions and see that BitDiffusion is indeed very much like your method but without the boundary considerations. In light of your good experimental results, it seems that your method does have promise even if I don't believe right now that it is well justified. I think the paper is quite borderline because of the unclear derivation of the methodology but good experimental results. I will raise my score to 5. --- Rebuttal 9: Title: Response to Reviewer Comment: We sincerely appreciate your response, and we are delighted to have addressed most of your concerns. We attach great importance to your suggestions and conduct some analytical experiments for better understanding. **Q: Why does our objective work?** 1. In practice, errors come not only from the theory but also from the capability of neural networks. And **the error caused by optimizing the upper bound is much smaller than the error from the neural network.** Therefore, **the theoretical error caused by optimizing the upper bound** is negligible. The symbols in the formula may sometimes be confusing, and substituting concrete numbers will lead to a more intuitive understanding. We add an additional analysis experiment on IWSLT14 to reveal the thinking behind our formula.
| Objective | $\mathbb{E}_ {\tilde{\mathbf{x}}_ t}\Vert\mathbf{x}_ 0 - \mathbf{x}_ \theta\Vert$ | $\mathbb{E}_ {\tilde{\mathbf{x}}_ t}\Vert\tilde{u}_ t(\tilde{\mathbf{x}}_ t\vert\mathbf{x}_ 0) - \tilde{u}_ t(\tilde{\mathbf{x}}_ t\vert\mathbf{x}_ \theta)\Vert$ | $\mathbb{E}_ {\tilde{\mathbf{x}}_ t} [p(\mathbf{x}_ \theta \in C_{\mathbf{x}_0})]$ | BLEU (BLEU-1/2/3/4) |
| --- | --- | --- | --- | --- |
| $\Vert\mathbf{x}_ 0 - \mathbf{x}_ \theta\Vert$ | 8.4358 | 1.5613 | 51.81% | 33.42 (68.0/42.0/27.7/18.6) |
| $\Vert\tilde{u}_ t(\tilde{\mathbf{x}}_ t\vert\mathbf{x}_ 0) - \tilde{u}_ t(\tilde{\mathbf{x}}_ t\vert\mathbf{x}_ \theta)\Vert$ | 8.4061 | 1.5518 | 52.34% | 33.49 (68.0/41.9/27.7/18.6) |

We report the expected errors of $\mathbf{x}_ 0$ and $\tilde{u}_ t$ on the test set. It is easy to observe that, with the dynamic coefficient $\frac{T-r\times G(\mathbf{x}_0, \boldsymbol{\epsilon})}{T}$, the error on $\mathbf{x}_0$ (8.4358) is much larger than the error on $\tilde{u}_t$ (1.5613). This supports Lines 180-182 that predicting $\tilde{v}_t^\theta$ will amplify the error on $\mathbf{x}_0$, whereas predicting $\mathbf{x}_0$ will reduce the error of neural networks on $\tilde{u}_t$. **Therefore, predicting $\mathbf{x}_0$ is beneficial for reducing the impact of the prediction error of the neural network compared with $\tilde{u}_t$.** In addition, we convert the objective to $\Vert\tilde{u}_ t(\tilde{\mathbf{x}}_ t\vert\mathbf{x}_ 0) - \tilde{u}_ t(\tilde{\mathbf{x}}_ t\vert\mathbf{x}_ \theta)\Vert$, where the neural network still predicts $\mathbf{x}_0$. We find that the expected errors of $\mathbf{x}_0$ and $\tilde{u}_t$ decrease by 0.0297 (0.35%) and 0.0095 (0.61%), respectively. This means that **the error caused by optimizing the upper bound is basically negligible compared to the error of the neural network.** Furthermore, predicting the upper bound has almost no impact on the final performance (0.07 BLEU). 2.
**Discrete modeling may be more robust to neural network errors than continuous modeling.** We compare DDPM and BitDiffusion with different image embedding types on the eval set of CIFAR-10.

| Models | $\mathbb{E}_ {\tilde{\mathbf{x}}_ t} [p(\mathbf{x}_ \theta\in C_ {\mathbf{x}_ 0})]$ |
| --- | --- |
| DDPM | 1.18% |
| BitDiffusion | 25.74% |

For continuous embedding, each pixel is represented by a float number in $[-1,1]$, and the predictions will be discretized to integers in $[0,255]$. It is really hard to recover the original pixels in only one step (1.18%). For binary coding, where the pixel representation is an 8-dimensional vector, it is easier to recover $\mathbf{x}_0$ due to a larger absorbing state (25.74%). **This means the discrete embedding space can be more robust to errors of neural networks.** (*It's worth noting that although discrete embedding is more robust, it weakens the continuity between adjacent pixels and currently cannot exceed the performance of continuous models.*) --- Rebuttal 10: Title: Response to Reviewer Comment: We would like to express our sincere gratitude to you again. Your suggestions are of great help in improving the quality of our paper. We believe the following minor revisions will improve the clarity of our paper. 1. In the **Training Objective** part of Section 3.3, we will change Equation 24 to a quick introduction of $\tilde{u}_ t$ and move the equation to this part. Then we will add the derivation from $\Vert\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ 0)-\tilde{u}_ t(\tilde{\mathbf{x}}_ t|\mathbf{x}_ \theta)\Vert$ to $\Vert\mathbf{x}_ 0-\mathbf{x}_ \theta\Vert$ as discussed above, including the upper bound. Therefore, our method covers the entire framework of Flow Matching in theory. 2.
In the **Reverse Process** part of Section 3.3, we will add an explanation that *solving the equation $\tau=\mathcal{T}(t,G(\mathbf{x}_0,\frac{\mathbf{x}-\mathbf{u}(\mathbf{x}_0,\tau)\mathbf{x}_0}{\mathbf{v}(\mathbf{x}_0,\tau)}))$ for $\tau$ with respect to the change of $\mathbf{x}$ in real time is difficult*. Therefore, theory and practice are distinguished in this part, where we explain how to implement our method. 3. In Section 4, we will add analyses of the expected errors as demonstrated above. 4. Other derivations and discussions that do not affect reading will be added to the appendix. --- Rebuttal Comment 10.1: Title: Response to Authors Comment: I appreciate the empirical analysis of the error induced when switching from the $u_t$-based objective to the $x_0$-based objective. I also think the paper's clarity will be greatly improved by including this derivation of the loss in the main text, along with its relation to flow matching; it will definitely help the reader's understanding of your method. I will raise my score to 6, as my main concern about the unclear derivation of the method has been mostly resolved. --- Reply to Comment 10.1.1: Title: Response to Reviewer Comment: Thank you again for your patience and kindness; your suggestions and guidance have been very helpful.
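The relation between the $\tilde{u}_t$-based and $\mathbf{x}_0$-based objectives discussed in this thread can be sanity-checked numerically. Below is a minimal sketch: the scalar schedule coefficients `u_coef` and `v_coef`, the values of `T`, `r`, `G`, and holding `G` fixed for both $\mathbf{x}_0$ and $\mathbf{x}_\theta$ are all simplifying assumptions made for illustration, not the paper's actual setup.

```python
import numpy as np

# Toy check (scalar schedule, G held fixed for both x0 and x_theta --
# a simplifying assumption): with
#   u_tilde(x) = (u_coef * x + v_coef * eps) * c,  c = (T - r * G) / T,
# the u_tilde-space error is the x0-space error scaled by c * u_coef,
# so dropping the dynamic coefficient only reweights samples.

T, r, G = 2000.0, 0.9, 300.0
c = (T - r * G) / T                 # dynamic coefficient, 0 < c < 1
u_coef, v_coef = 0.7, 0.3           # toy schedule values

rng = np.random.default_rng(1)
x0 = rng.standard_normal(16)                       # "ground truth"
x_theta = x0 + 0.1 * rng.standard_normal(16)       # imperfect prediction
eps = rng.standard_normal(16)                      # shared noise

def u_tilde(x):
    # conditional "velocity" with the dynamic coefficient applied
    return (u_coef * x + v_coef * eps) * c

err_x0 = np.linalg.norm(x0 - x_theta)
err_u = np.linalg.norm(u_tilde(x0) - u_tilde(x_theta))

assert 0.0 < c < 1.0
# The two objectives differ only by the constant factor c * u_coef:
assert np.isclose(err_u, c * u_coef * err_x0)
print(c, err_x0, err_u)
```

Under these assumptions the two losses differ by a constant per-sample factor, which is consistent with the rebuttal's point that dropping the coefficient, as in DDPM's simplified loss, only changes the sample weighting.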
Summary: In this study, the authors propose a framework to address the discrepancy between discrete data and continuous modeling in diffusion processes. The approach involves a two-step forward process that first estimates the boundary as a prior distribution, then rescales the forward trajectory to build a boundary conditional diffusion model. The reverse process is proportionally adjusted to ensure the learned contours yield more accurate discrete data. Experimental results show the method's strong performance in language modeling and discrete image generation tasks, outperforming previous models in several areas and establishing a new state-of-the-art for categorical image generation on the Cifar-10 dataset. Strengths: - The authors have conducted comprehensive experiments, providing thorough and complete details in their exploration. - The novelty of the study lies in its discussion of the concept of "discrete area", presenting a fresh perspective in the field. Weaknesses: - The explanation as to why this new framework can enhance performance is not clearly articulated in the study. - From my understanding, Algorithm 1 appears to be equivalent to training original diffusion models with $t\sim U[t_0,T]$ as opposed to $t\sim U[1,T]$. Technical Quality: 2 Clarity: 3 Questions for Authors: - Is there a possibility that the boundaries of the discrete areas of two distinct data points might intersect? If so, why would pushing the noise to the boundary be beneficial? Could certain perturbations potentially lead to inaccuracies in the final result? - According to Equation (14), shouldn't the input of $\Psi^{-1}([\cdot,\cdot])$ be the boundary point and the "crossing boundary time"? If so, why does line 8 of Algorithm 2 suggest that a sequence of $\tau$ ranging from $T$ to $r\times t_0$ can serve as the input of $\Psi^{-1}([\cdot,\cdot])$? - also please refer to weaknesses. If these questions are addressed, I would consider revising the score. 
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors do acknowledge the limitations of their work, which is commendable. Furthermore, I recommend conducting theoretical analysis, such as exploring the convergence properties of their new framework. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your meticulous review and the valuable comments. Please kindly find our response below. **Q1: Why can our framework enhance performance** 1) **From the perspective of Motivation** As illustrated in the Introduction and Figure 1, we find that continuous diffusion processes oversimplify the objective of discrete modeling. The discrete data are treated as single points, while they are actually areas in the diffusion space. This means diffusion models learned on the oversimplified objective generate sub-optimal probability density contours, which are inconsistent with the discrete priors (Lines 38-41 and Figure 1A). Therefore, **our method takes the discrete prior into consideration and designs a more appropriate diffusion trajectory for discrete modeling problems** (Lines 42-65 and Figure 1B). 2) **From the perspective of Methodology** We theoretically derive the diffusion process conditioned on the discrete priors and obtain the sampling distribution of the corresponding forward process $\tilde{p}_t(\mathbf{x}|\mathbf{x}_0)$ (Equations 21 and 22). It is easy to calculate that $\tilde{p}_0(\mathbf{x}|\mathbf{x}_0)$ is exactly the discrete boundary and $\tilde{p}_T(\mathbf{x}|\mathbf{x}_0)$ is a Gaussian distribution. During the inference stage, i.e., reversing the time step from $T$ to $0$, the probability density will gradually become consistent with the contour when approaching the boundary, which is in line with Lines 46-51 and Figure 1B. Hence, **our method will precisely guide the random noise into the discrete area.** Figure 2 demonstrates sample points exactly calculated by our proposed method (Equation 22), and the probability contours calculated by our method can be easily adapted to different discrete boundaries and noising trajectories, which is in line with our expectations in Figure 1 of the Introduction Section. 3)
**From the perspective of Experiment** Our framework is constructed on Difformer and BitDiffusion with the same configurations for both language and image generation tasks. As shown in Tables 1 and 3, **our method demonstrates significant improvements over them**. Besides, the ablations in Table 4 reveal that, as the confidence factor $r$ increases, i.e., as the influence of the discrete prior increases, the discrete modeling performance improves in all aspects. **Q2: Algorithm 1** You can consider $t\sim U[t_0,T]$. However, $t_0$ is not a constant but a random variable $t_0=G(\mathbf{x}_0,\boldsymbol{\epsilon})$, as illustrated in Equation 12. **Sampling $t$ from $U[1,T]$ and then transforming it to $\tau$ with $\mathcal{T}(t,G(\mathbf{x}_0,\boldsymbol{\epsilon}))$ is equivalent to directly sampling $\tau$ from $U[G(\mathbf{x}_0,\boldsymbol{\epsilon}), T]$.** We choose the former procedure because it is easy to perform in parallel. Algorithm 1 is just a commonly used substitution technique in sampling. **Q3: Boundaries of discrete areas** *Due to the character limit of the rebuttal, this is only an outline of our answer. Please refer to the comment for more details.* 1. **The discrete areas of two distinct data points will not intersect.** As illustrated in Lines 34-37 and Equation 6, points in a discrete area will not be attributed to any other area by definition. 2. **Boundaries of two areas can overlap with the confidence factor $r=1$, but this is impossible when $r<1$.** When the confidence factor $r=1$, as illustrated in Equations 7 and 8, the boundary is where the probabilities of points belonging to the neighbouring areas are equal. When the confidence factor $r$ decreases, the range of each discrete area is gradually reduced and there is no overlap between boundaries. 3.
**Pushing the noise to the boundary is beneficial.** - **Empirically**, previous work like Difformer and Dinoiser, as well as our Table 1, has revealed that **increasing the minimal noise scale can benefit the performance** (Lines 217-219). And it is obvious that **pushing the noise to the boundary is precisely what increases the minimal noise scale**. - **Theoretically**, the probability density inside the discrete area can be ignored. Based on the observation that any point inside the discrete area will be attributed to the corresponding discrete data (Lines 34-37), we can easily draw the conclusion: **we don't need to model the probability density inside the discrete area because our diffusion model only needs to guide a noise into this area, where the exact end position doesn't matter.** Therefore, pushing the noise to the boundary does not cause theoretical damage to the probability modeling. And the diffusion model can be less disrupted by invalid data, i.e., points inside the area, during the training stage. 4. **Certain perturbations can lead to inaccuracies in the final result without our confidence factor.** We use the confidence factor $r$ (Lines 137-147) to control the influence of perturbations. When we set a smaller $r$, the learned reverse process will be robust to perturbations. **Q4: Input of function $\Psi^{-1}([\cdot,\cdot])$** **The input of $\Psi^{-1}([\cdot,\cdot])$ is valid for any arbitrary pair of point $\tilde{\mathbf{x}}_t$ and time $\tau$.** Functions $\Psi$ and $\Psi^{-1}$ in Equation 14 are directly extended from Equations 4 and 5, which construct the invertible relationship between any point-time pair ($\tilde{\mathbf{x}}_t$-$\tau$) and their initial noise $\boldsymbol{\epsilon}$. In Equation 14 of Section 3.1, we take the boundary point $\mathbf{x}_{t_0}$ and the crossing-boundary time $t_0$ pair as input because we just want to estimate this boundary.
However, this does not mean the input of $\Psi^{-1}$ can only be the pair of boundary point and crossing-boundary time; any other point-time pairs are still valid. This is why we use $\Psi^{-1}$ to update the trajectory in Line 8 of Algorithm 2. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed reply. Now, I have no concerns about the motivation of this work. The motivation is reasonable. But I still feel confused about the design of the sampling (Algorithm 2); can you provide more illustration of this? It would be beneficial to give a clearer presentation of this. --- Rebuttal 2: Title: Detailed answer for Q3: Boundaries of discrete areas Comment: **Q3: Boundaries of discrete areas** 1. **The discrete areas of two distinct data points will not intersect.** As illustrated in Lines 34-37 and Equation 6, points in a discrete area will not be attributed to any other area by definition. Take language modeling as an example: the discrete area of a token 'A' is the set of all points in the embedding space whose dot similarity to token 'A' is larger than to any other token in the vocabulary. This means almost any point in the embedding space will be attributed to only one token. When the dot similarities of a point to different tokens are equal, the point lies at the boundary of these discrete areas. Besides, the discrete area is convex. This means all points within the boundary will only belong to this area, i.e., we can safely describe the area with boundaries as in Section 3.1. A quick proof of convexity: > Suppose the embedding of token 'A' is $\mathbf{x}_0$, and we are given two points $\mathbf{y}^0$ and $\mathbf{y}^1$ in the embedding space. > > If $\mathbf{y}^0$ and $\mathbf{y}^1$ are inside the discrete area of token 'A', then for all embeddings $\mathbf{x}_i$ of other tokens > > we have: $\mathbf{x}_0\cdot\mathbf{y}^0\geq\mathbf{x}_i\cdot \mathbf{y}^0$ and $\mathbf{x}_0\cdot\mathbf{y}^1\geq\mathbf{x}_i\cdot \mathbf{y}^1$.
> > For any point $\hat{\mathbf{y}} = t \ \mathbf{y}^0 + (1-t)\ \mathbf{y}^1$, $t\in[0,1]$, > > we have $\mathbf{x}_0\cdot\hat{\mathbf{y}} = t(\mathbf{x}_0\cdot\mathbf{y}^0)+(1-t)(\mathbf{x}_0\cdot\mathbf{y}^1)\geq t(\mathbf{x}_i\cdot\mathbf{y}^0)+(1-t)(\mathbf{x}_i\cdot\mathbf{y}^1) = \mathbf{x}_i\cdot\hat{\mathbf{y}}$. > > Therefore, the discrete area is convex. 2. **Boundaries of two areas can overlap with the confidence factor $r=1$, but this is impossible when $r<1$.** When the confidence factor $r=1$, as illustrated in Equations 7 and 8, the boundary is where the probabilities of points belonging to the neighbouring areas are equal. Consider a simplified situation with only two tokens 'A' and 'B' (binary classification): the boundary of token 'A' overlaps the boundary of token 'B', where $p(\mathbf{x} \in A)=p(\mathbf{x}\in B) = 0.5$. When the confidence factor $r$ decreases, the range of each discrete area is gradually reduced. For example, we may take $p(\mathbf{x} \in A)=0.7 > p(\mathbf{x}\in B)$ as the boundary of 'A' and $p(\mathbf{x} \in B)=0.7$ as the boundary of 'B'. In this case, $0.3<p(\mathbf{x} \in A)<0.7$ is an unattributed area and there is no overlap between the boundaries of 'A' and 'B'. 3. **Pushing the noise to the boundary is beneficial.** - **Empirically**, previous work like Difformer and Dinoiser and our Table 1 have revealed that **increasing the minimal noise scale can benefit the performance** (Lines 217-219). In both their experiments and our Table 1, compared to the original Diffusion LMs like DiffuSeq and SeqDiffuSeq, increasing the minimal noise scale (Difformer and Dinoiser) can improve the performance. - **Theoretically**, the probability density inside the discrete area can be ignored. > Let's make a simple analogy. If we want to launch a satellite around the moon, we can treat the moon as a point mass. This is similar to the traditional continuous diffusion process.
When it comes to landing on the moon, we have to take into account the shape of the moon, while the gravity field inside the moon does not matter. This is what our discrete problem looks like. Based on the observation that any point inside the discrete area will be attributed to the corresponding token (Lines 34-37), we can easily draw the conclusion: **we don't need to model the probability density inside the discrete area because our diffusion model only needs to guide a noise into this area, where the exact end position doesn't matter.** Therefore, pushing the noise to the boundary does not cause theoretical damage to the probability modeling. And the diffusion model can be less disrupted by invalid data, i.e., points inside the area, during the training stage. 4. **Certain perturbations can lead to inaccuracies in the final result without our confidence factor.** We use the confidence factor $r$ (Lines 137-147) to control the influence of perturbations. Consider the above example of only two tokens 'A' and 'B'. When the confidence factor $r=1$, points $\mathbf{x}$ on the boundary of 'A' follow $p(\mathbf{x}\in A)=p(\mathbf{x} \in B) = 0.5$. Therefore, any perturbation to the prediction of the learned diffusion model may flip the attribution of $\mathbf{x}$. To alleviate this problem, we can decrease the factor $r$ so that points on the boundary have higher confidence, e.g., $p(\mathbf{x}\in A)=0.7>p(\mathbf{x}\in B)$, which is more robust to perturbations. In practice, $r=1$ works well for tasks with strong conditions and we should decrease $r$ for unconditional tasks. --- Rebuttal 3: Title: Response to Reviewer Comment: We sincerely appreciate your response, and we are delighted to have addressed some of your concerns. **Q: The design of sampling (Algorithm 2)** Let's start from the DDIM sampling process in our notation. > 1. $\mathbf{x}_ T \sim \mathcal{N}(0,1)$ > 2. for $t=T,\dots,1$ do > 3.
` `$\hat{\mathbf{x}}_ 0 = \mathbf{x}_\theta(\mathbf{x}_t, t)$ (*Pseudo Target*) > 4. ` `$\boldsymbol{\epsilon} = \frac{\mathbf{x}_t - \mathbf{u}_t\hat{\mathbf{x}}_0}{\mathbf{v}_t}$ (*Trajectory Alteration*) > 5. ` `$\mathbf{x}_ {t-1} = \mathbf{u}_ {t-1}\hat{\mathbf{x}}_ 0 + \mathbf{v}_ {t-1}\boldsymbol{\epsilon}$ (*Previous Sample*) This is a synchronous trajectory update, where the algorithm directly updates the start point $\boldsymbol{\epsilon}$ of the trajectory based on the current prediction $\hat{\mathbf{x}}_0$. Our method requires one more step of timestep rescaling, $\tau = \mathcal{T}(t, G(\hat{\mathbf{x}}_ 0,\boldsymbol{\epsilon}))$, which takes the current trajectory start point as input. However, the trajectory alteration (step 4 above) also requires the current timestep $\tau$ as input: $\hat{\boldsymbol{\epsilon}} = \frac{\tilde{\mathbf{x}}_ t - \mathbf{u}_ \tau\hat{\mathbf{x}}_ 0}{\mathbf{v}_ \tau}$. Therefore, our procedure cannot be synchronous, and we must choose $\tau$ or $\hat{\boldsymbol{\epsilon}}$ to be asynchronous. As discussed with reviewer 1VFK (Q3.3, fixing the initial path), we find that even discarding the Trajectory Alteration step leads to almost no performance degradation. (*1. From the perspective of uncertainty, if the learned diffusion model predicts a different $\mathbf{x}_0$, we have to update the original noise because we are now on a different trajectory. If the learned model keeps changing the predicted target, which means it is uncertain about where to go, updating the trajectory corresponding to an unreliable target may not be beneficial for the denoising process. If the learned model rarely changes its prediction, it means we are on the right path and no updates are needed. Therefore, it is a viable option to discard the Trajectory Alteration step to make decoding faster. 2. 
From the perspective of experiments, as illustrated in Table 3, if we train with the rescaled forward process and decode with the unrescaled trajectory, there will be only a slight performance degradation. The performance improvement of our model mainly comes from the training process, and small changes in the decoding process may not have much impact. In practical application, we simply discard the trajectory update in our experiments and find almost no performance degradation.*) Therefore, we choose $\hat{\boldsymbol{\epsilon}}$ to be asynchronous, and the steps in the for loop (line 2) become: > 3. ` `$\hat{\mathbf{x}}_ 0 = \mathbf{x}_\theta(\tilde{\mathbf{x}}_t, t)$ (*Pseudo Target*) > 4. ` `$\tau = \mathcal{T}(t-1, G(\hat{\mathbf{x}}_ 0,\hat{\boldsymbol{\epsilon}}))$ (*Trajectory Rescaling*) > 5. ` `$\tilde{\mathbf{x}}_ {t-1}=\mathbf{u}_ \tau \hat{\mathbf{x}}_ 0 + \mathbf{v}_ \tau \hat{\boldsymbol{\epsilon}}$ (*Previous Sample*) > 6. ` `$\hat{\boldsymbol{\epsilon}} = \frac{\tilde{\mathbf{x}}_ {t-1} - \mathbf{u}_ \tau\hat{\mathbf{x}}_ 0}{\mathbf{v}_ \tau}$ (*Asynchronous Trajectory Alteration*) where we add the Trajectory Rescaling step and move the Asynchronous Trajectory Alteration to the end. The other steps in Algorithm 2 keep track of the current $\tau$ and $\hat{\boldsymbol{\epsilon}}$. It is worth noting that we cannot iterate over $\tau$ in the same way as $t$, because $\tau$ is dynamic and differs for each pair of $\mathbf{x}_0$ and $\boldsymbol{\epsilon}$. In the same iteration round, different $\mathbf{x}_0$ correspond to different $\tau$. Therefore, Algorithm 2 can complete the iterative prediction of $\mathbf{x}_0$ with only subtle errors. And in the analytical experiments discussed with reviewer 1VFK, the errors in the sampling stage have almost no impact on the final results. 
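Putting the asynchronous loop above into runnable form may make the bookkeeping clearer. In this sketch, the schedule coefficients $\mathbf{u}_\tau, \mathbf{v}_\tau$, the rescaling map $\mathcal{T}(\cdot, G(\cdot,\cdot))$, and the denoiser are all toy placeholders, not the paper's actual functions:

```python
import numpy as np

T = 10
rng = np.random.default_rng(0)

def model(x_t, t):
    """Toy stand-in for x_theta: pretend the clean target is the zero vector."""
    return np.zeros_like(x_t)

def u(tau):  # illustrative signal coefficient u_tau (assumed cosine-style schedule)
    return np.cos(0.5 * np.pi * tau / T)

def v(tau):  # illustrative noise coefficient v_tau
    return np.sin(0.5 * np.pi * tau / T)

def rescale(t, x0_hat, eps_hat):
    """Placeholder for tau = T(t, G(x0_hat, eps_hat)); identity rescaling here."""
    return t

x = rng.normal(size=4)   # x_T ~ N(0, I)
eps_hat = x.copy()       # current trajectory start point
for t in range(T, 0, -1):
    x0_hat = model(x, t)                        # 3. pseudo target
    tau = rescale(t - 1, x0_hat, eps_hat)       # 4. trajectory rescaling (uses the old eps_hat)
    x = u(tau) * x0_hat + v(tau) * eps_hat      # 5. previous sample
    # 6. asynchronous trajectory alteration; because steps 5 and 6 share the
    # same tau here, this recovers eps_hat (numerically) unchanged.
    if v(tau) > 0:
        eps_hat = (x - u(tau) * x0_hat) / v(tau)
```

With the identity rescaling, step 6 inverts step 5 exactly, which is why discarding it changes nothing in this variant; a non-trivial rescaling map is where the asynchronous choice actually matters.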
(*There is a typo in Algorithm 2 that $\tilde{\mathbf{x}}_ t$ in Line 6 should be $\tilde{\mathbf{x}}_ {t-\Delta t}$*) --- Rebuttal 4: Title: Response to Reviewer Comment: We hope to supplement the above description in a more vivid way. The sampling process of DDIM can be demonstrated as: $$ \begin{aligned} \mathbf{x}_ \theta ^{t+1} & \\\\ \searrow &\\\\ \boldsymbol{\epsilon} \rightarrow &\mathbf{x}_ t \end{aligned} \Rightarrow \begin{aligned} &\quad \mathbf{x}_ \theta ^ {t}\\\\ &\nearrow \\\\ \boldsymbol{\epsilon} \rightarrow &\mathbf{x}_ t \end{aligned} \Rightarrow \begin{aligned} &\qquad \ \mathbf{x}_ \theta ^ t\\\\ & \qquad \ \downarrow \\\\ \boldsymbol{\epsilon} \rightarrow &\mathbf{x}_ t \rightarrow \boldsymbol{\epsilon}' \end{aligned} \Rightarrow \begin{aligned} &\qquad \ \mathbf{x}_ \theta ^ t\\\\ & \qquad \ \downarrow \quad \searrow \\\\ \boldsymbol{\epsilon} \rightarrow &\mathbf{x}_ t \rightarrow \boldsymbol{\epsilon}' \rightarrow \mathbf{x}_ {t-1} \end{aligned} $$ In this process, the reverse trajectory is $\boldsymbol{\epsilon} \rightarrow \mathbf{x}_ t \rightarrow \mathbf{x}_ 0$. 
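For reference, the synchronous DDIM trajectory sketched above can be written in a few lines; the cosine-style schedule and the zero-predicting toy denoiser here are illustrative assumptions, not the paper's actual components:

```python
import numpy as np

# Synchronous DDIM update under the x_t = u_t * x_0 + v_t * eps convention.
T = 10
t_grid = np.arange(T + 1)
u = np.cos(0.5 * np.pi * t_grid / T)   # u_0 = 1, u_T = 0 (assumed schedule)
v = np.sin(0.5 * np.pi * t_grid / T)   # v_0 = 0, v_T = 1

def model(x_t, t):
    """Toy denoiser x_theta: pretend the clean sample is the zero vector."""
    return np.zeros_like(x_t)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                       # x_T ~ N(0, I)
for t in range(T, 0, -1):
    x0_hat = model(x, t)                     # pseudo target
    eps = (x - u[t] * x0_hat) / v[t]         # trajectory alteration (fixed point x_t)
    x = u[t - 1] * x0_hat + v[t - 1] * eps   # previous sample x_{t-1}
```

With a real denoiser, `eps` changes whenever the prediction `x0_hat` changes, which is exactly the trajectory update with $\mathbf{x}_t$ as the fixed point.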
When the prediction of $\mathbf{x}_ 0$ changes, the trajectory is updated with $\mathbf{x}_ t$ as the fixed point $\boldsymbol{\epsilon}' \rightarrow \mathbf{x}_ t \rightarrow \mathbf{x}_ \theta$, which can be demonstrated as: $$ \begin{aligned} \boldsymbol{\epsilon} \rightarrow &\mathbf{x}_ t \rightarrow \mathbf{x}_ 0 \end{aligned} \Rightarrow \begin{aligned} &\qquad\ \mathbf{x}_ \theta\\\\ &\quad \nearrow \\\\ \boldsymbol{\epsilon} \rightarrow &\mathbf{x}_ t \rightarrow \mathbf{x}_ 0 \\\\ \nearrow&\\\\ \boldsymbol{\epsilon}' \quad\ & \end{aligned} \Rightarrow \begin{aligned} &\qquad\qquad\ \mathbf{x}_ {\theta}\\\\ &\qquad \quad \nearrow \\\\ &\qquad\ \mathbf{x}_ {t-1}\\\\ &\quad \nearrow \\\\ \boldsymbol{\epsilon} \rightarrow &\mathbf{x}_ t \rightarrow \mathbf{x}_ 0 \\\\ \nearrow&\\\\ \boldsymbol{\epsilon}' \quad\ & \end{aligned} $$ Therefore, the DDIM algorithm can be simplified as: > 1. $\mathbf{x}_ T \sim \mathcal{N}(0,1)$ > 2. for $t=T,\dots,1$ do > 3. ` `$\hat{\mathbf{x}}_ 0 = \mathbf{x}_\theta(\mathbf{x}_t, t)$ (*Pseudo Target*) > 4. ` `$\mathbf{x}_ {t-1} = \mathbf{u}_ {t-1}\hat{\mathbf{x}}_ 0 + \mathbf{v}_ {t-1}\frac{\mathbf{x}_t - \mathbf{u}_t\hat{\mathbf{x}}_0}{\mathbf{v}_t}$ (*Previous Sample*) In our Algorithm 2, the timestep rescaling $\tau = \mathcal{T}(t, G(\mathbf{x}_ 0, \boldsymbol{\epsilon}))$ takes both $\mathbf{x}_ 0$ and $\boldsymbol{\epsilon}$ as input. **This means errors in the predicted $\mathbf{x}_ \theta$ will be magnified if we update $\boldsymbol{\epsilon}$ before rescaling the timestep.** Therefore, there are two different solutions: 1. 
One is our implementation, where we directly discard the Trajectory Alteration step, and the fixed point of our reverse trajectory is $\boldsymbol{\epsilon}$: $$ \begin{aligned} \boldsymbol{\epsilon} \rightarrow &\tilde{\mathbf{x}}_ t \rightarrow \mathbf{x}_ 0 \end{aligned} \Rightarrow \begin{aligned} &\qquad \qquad \mathbf{x}_ \theta\\\\ &\qquad \ \ \ \nearrow\\\\ &\qquad \tilde{\mathbf{x}}_ t' \\\\ &\ \ \nearrow \\\\ &\boldsymbol{\epsilon} \rightarrow \tilde{\mathbf{x}}_ t \rightarrow \mathbf{x}_ 0 \\\\ \end{aligned}\Rightarrow \begin{aligned} &\qquad\qquad\qquad \mathbf{x}_ \theta\\\\ &\qquad\qquad \ \ \ \nearrow\\\\ &\qquad \qquad \tilde{\mathbf{x}}_ {t-1}\\\\ &\qquad \ \ \ \nearrow\\\\ &\qquad \tilde{\mathbf{x}}_ t' \\\\ &\ \ \nearrow \\\\ &\boldsymbol{\epsilon} \rightarrow \tilde{\mathbf{x}}_ t \rightarrow \mathbf{x}_ 0 \\\\ \end{aligned} $$ It is worth noting that, in this situation, the Trajectory Alteration step is just a placeholder and does not change the value of $\boldsymbol{\epsilon}$, because the Previous Sample (Step 5) and Asynchronous Trajectory Alteration (Step 6) are exactly the same equation. 2. The Asynchronous Trajectory Alteration serves the other solution, where we still take $\tilde{\mathbf{x}}_ t$ as the fixed point. In this situation, the Previous Sample step is $\tilde{\mathbf{x}}_ {t-1} = \mathbf{u}_ \tau\hat{\mathbf{x}}_ 0 + \mathbf{v}_ \tau \frac{\tilde{\mathbf{x}}_ t-\mathbf{u}_ {\tau_ \text{prev}}\hat{\mathbf{x}}_ 0}{\mathbf{v}_ {\tau_ \text{prev}}}$. This means we calculate the $\tau$ with $\boldsymbol{\epsilon}$ but generate $\tilde{\mathbf{x}}_ {t-1}$ with the updated $\boldsymbol{\epsilon}'$. 
The corresponding reverse trajectory is: $$ \begin{aligned} \boldsymbol{\epsilon} \rightarrow &\tilde{\mathbf{x}}_ t \rightarrow \mathbf{x}_ 0 \end{aligned} \Rightarrow \begin{aligned} &\qquad\ \mathbf{x}_ \theta\\\\ &\quad \nearrow \\\\ \boldsymbol{\epsilon} \rightarrow &\tilde{\mathbf{x}}_ t \rightarrow \mathbf{x}_ 0 \\\\ \nearrow&\\\\ \boldsymbol{\epsilon}' \quad\ & \end{aligned} \Rightarrow \begin{aligned} &\qquad\qquad\ \mathbf{x}_ {\theta}\\\\ &\qquad \quad \nearrow \\\\ \boldsymbol{\epsilon} &\rightarrow \ \tilde{\mathbf{x}}_ {t-1}\\\\ &\quad \nearrow \\\\ \boldsymbol{\epsilon} \rightarrow &\tilde{\mathbf{x}}_ t \rightarrow \mathbf{x}_ 0 \\\\ \nearrow&\\\\ \boldsymbol{\epsilon}' \quad\ & \end{aligned} $$ --- Rebuttal Comment 4.1: Title: Response to Reviewer Comment: As mentioned in Lines 195-196 of our paper and in the discussion with reviewer 1VFK, there is almost no performance gap between the two different fixed-point solutions of our algorithm. Therefore, we choose the former one, which is simple and stable. Besides, the Asynchronous Trajectory Alteration step (Line 8 in Algorithm 2) is optional in practice. --- Rebuttal 5: Title: Response to Reviewer Comment: Thank you for your patience and kindness; your suggestions and guidance have been very helpful. We will make the following minor revisions to improve clarity. 1. Replace the current $\Delta t$ iteration with simpler notation. 2. Add an explanation of the asynchronous conflict between $\tau$ and $\boldsymbol{\epsilon}$ and the trajectory alteration strategy, as discussed above. 3. Add a comparison with DDIM in the appendix for better understanding. 4. Other derivations and discussions that do not affect reading will be added to the appendix. We believe these revisions can improve the clarity of our algorithm, and we would like to express our sincere gratitude to you again. 
Besides, the asynchronous conflict problem is the same as the problem of $\tau=\mathcal{T}(t,G(\mathbf{x}_0,\frac{\mathbf{x}-\mathbf{u}(\mathbf{x}_0,\tau)\mathbf{x}_0}{\mathbf{v}(\mathbf{x}_0,\tau)}))$ discussed with reviewer 1VFK. > In the **Reverse Process** part of Section 3.3, we will add an explanation that *solving the equation $\tau=\mathcal{T}(t,G(\mathbf{x}_0,\frac{\mathbf{x}-\mathbf{u}(\mathbf{x}_0,\tau)\mathbf{x}_0}{\mathbf{v}(\mathbf{x}_0,\tau)}))$ to get the $\tau$ with respect to the change of $\mathbf{x}$ in real time is difficult*. The theory and practice are therefore distinguished in this part, where we explain how to implement our method, and we will combine these two explanations accordingly.
Summary: A discrepancy exists when using continuous diffusion models to model discrete data, which has not been sufficiently addressed by past methods. This paper redesigns the forward and backward processes of the diffusion model to eliminate this issue. Experiments on translation tasks and CIFAR-10 validate the proposed method's effectiveness. Strengths: 1. The writing is clear and concise. 2. This paper aims to address an important and interesting question. 3. Experiments demonstrate that the proposed method surpasses all baselines, validating its effectiveness. Weaknesses: Please see the Questions. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Sec. 4, the authors present an untrainable $p(W|x_0)$ given the embedding layer. If we employ a trainable $p_{\theta}(W|x_0)$ (e.g., Diffusion-LM [1]), will the discrepancy between discrete data and continuous modeling still exist? If it does, will your proposed method still bring improvements in this scenario? 2. This paper presents the results of translation tasks. I am curious to know whether the proposed method would be effective for models like Plaid [2], which can compute text likelihood. The discrepancy between discrete data and continuous modeling mainly occurs during the step of decoding the embedding into text, while the likelihood calculation does not require this decoding step. 3. In Table 3, the authors report FIDs of 51.27 and 30.97 for D3PM. However, the best generative FID on CIFAR-10 in the original D3PM paper is 7.34. Why didn't the authors report this result as a baseline? This requires further explanation. [1] Diffusion-LM Improves Controllable Text Generation [2] Likelihood-Based Diffusion Language Models Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your meticulous review and the valuable comments. Please kindly find our response below. **Q1: The embedding layer** 1) We want to clarify that the embedding layer we use is actually trainable, as illustrated in Lines 200 and 208. The $\theta$ is only used to denote the diffusion parameters in our paper. 2) The discrepancy between discrete data and continuous modeling exists for both fixed and trainable embedding layers. 3) Besides, we are fully aware that it has been demonstrated in Diffusion-LM that a trainable embedding layer achieves better performance than a fixed one. **Q2: Application scenarios of our framework** 1) There seems to be an embedding layer in Plaid as well (Section 3.1 in Plaid), so our framework is theoretically applicable. Likelihood calculation is also a similarity function that has the same effect as the dot product in our work, which is compatible with the $f(\mathbf{x},\mathcal{I})$ framework we defined. 2) Besides, as illustrated in Lines 28-30, our framework is specifically designed for continuous diffusion models; discrete diffusion processes with discrete state spaces do not face the problem in Figure 1A and Lines 38-41. Therefore, our framework extends the generality of the widely used continuous diffusion processes, but does not help discrete diffusion processes. **Q3: FID of D3PM** We want to clarify that we have reported the original D3PM in Table 3 with an FID of 7.34, in the first line (D3PM GAUSS) of the Discrete Ordinal Pixels part. As demonstrated in Table 3 and Section 5, we use different types of image embeddings, and different embedding types possess different levels of continuous information. The D3PMs with FIDs of 51.27 and 30.97 are the UNIFORM and ABSORBING versions with Categorical Pixels. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I've decided to maintain my score. 
--- Reply to Comment 1.1.1: Title: Response to Reviewer Comment: Thank you for your response and kindness, your suggestions have been very helpful.
Summary: The paper discusses a novel approach to discrete modeling using boundary conditions and diffusion processes. The primary contributions include: - Development of a framework for discrete image generation that incorporates binary coding and pixel embedding. - Introduction of an intermediate state to illustrate the correlation between discreteness and modeling difficulty. - Extensive experimental evaluation on datasets like CIFAR-10, demonstrating competitive results compared to state-of-the-art models. - Proposal of a new method for stochastic processes that improve model performance, especially in cases of categorical pixels. Strengths: - Originality: The paper introduces a unique method for addressing discrete modeling through a combination of binary coding and pixel embedding, which is innovative in the context of diffusion models. - Quality: The experimental setup is robust, with detailed evaluation metrics and comparisons to existing methods. The results show significant improvements, indicating the effectiveness of the proposed approach. - Clarity: The paper is well-structured, with clear explanations of the methodology and the underlying mathematical formulations. Figures and tables are used effectively to illustrate the results. - Significance: The approach has broad applicability in various fields requiring discrete data generation, such as image and language modeling. The improvements in FID scores and other metrics highlight the potential impact of this work on the research community. Weaknesses: 1. The approach is primarily tested on specific datasets like CIFAR-10. Expanding the evaluation to a wider range of datasets on high-resolution images could provide a more comprehensive assessment of the model's generalizability. 2. Language model experiments are mainly on translation tasks. Could you provide experiments on language modeling tasks, which is more common for benchmarking diffusion LMs. 
Technical Quality: 3 Clarity: 2 Questions for Authors: See weakness. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitation is mentioned in one sentence in the conclusion section, but not comprehensively discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your meticulous review and the valuable comments. Please kindly find our response below. **Q: Expanding datasets for images and languages** 1) For both language and image tasks, our framework is constructed on our baselines, Difformer and BitDiffusion, with the same configurations. We strictly follow their benchmarks and datasets to ensure a fair comparison. 2) Our experiments across languages and images may already reflect the generalizability of our framework. 3) Since training the image model takes a week and training the language model takes at least two days (lines 719-720), we do not have enough computing resources to scale to larger and more complex benchmarks. We sincerely accept your suggestions and will try to extend our method to larger benchmarks when we have sufficient computing resources. 4) We tried to conduct an experiment on ImageNet 64, but our computing resources are currently insufficient to complete it. We show the preliminary results of the first 100K steps as follows. | Binary Coding | FID | | ---- | ---- | | BitDiffusion | 47.82 | | Ours | 31.68 |
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Right this way: Can VLMs Guide Us to See More to Answer Questions?
Accept (poster)
Summary: VLMs offer an interesting opportunity to describe their uncertainty if the images given to them are of poor quality. This paper explores a framework for estimating whether a VLM can identify if its ability to correctly answer a visual question would improve were it given a better picture. The paper introduces a new labeled dataset for VQA where questions involve identifying how best to take better pictures. The paper introduces also a finetuning scheme with synthetically generated examples leveraging the labeled dataset. Strengths: - The problem is interesting, relevant, and important - The novel data set is an important contribution after it is released - The performance improvement after fine-tuning on LLaVA is impressive. Weaknesses: - It would be interesting to see how the model performs with different prompting techniques. Beyond vanilla CoT, can we try methods covered in (https://arxiv.org/pdf/2402.07927) - The problem specifically focuses on qualitative spatial reasoning with respect to moving the camera. Can this methodology be used for quantitative reasoning (e.g., Spatial-VLM, Spatial-RGPT), by identifying not just which direction, but by how much to move the camera? - The two perturbation ranges of 0.1-0.9 and 0.3-0.7 are confusing. What exactly are you differentiating with these categories? It would have been interesting to evaluate over disparate ranges (e.g., 0.1-0.3, 0.3-0.5, 0.5-0.7, …) so that we can see how performance drops with the perturbations Technical Quality: 3 Clarity: 3 Questions for Authors: - Why is LLaVA-1.5 13b worse than 7b? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are lightly discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the unique contribution of our study and the effectiveness of our training method. We believe these advancements highlight our work's potential to drive further research. We are glad to discuss the following points: ***W1. It would be interesting to see other method Beyond vanilla CoT*** **Response:** We agree that exploring diverse CoT methods for this task is meaningful. In our study, we experimented with a series of prompting techniques and presented the ones with the best performance based on our experiments. We will add more failure cases in the appendix in our next version to provide a comprehensive view of the challenges faced. As suggested by reviewer 1, we also made a complementary experiment with more complex zero-shot prompting techniques with the self-reflection mechanism. However, we also found that a more complex prompt does not guarantee a better performance, and that remains an interesting question. As prompt design remains an open question, we acknowledge that our settings may not fully tap the model's potential to achieve the best performance. Our primary aim was to test the model's baseline performance under general prompt settings, but we appreciate the suggestion to investigate further prompting methods as outlined in the referenced paper. We plan to explore these advanced techniques in future work to benchmark the current VLMs in a more comprehensive way. ***W2. Can this methodology be used for quantitative reasoning (e.g., Spatial-VLM, Spatial-RGPT), by identifying not just which direction, but by how much to move the camera?*** **Response:** We acknowledge that our current methodology focuses on qualitative spatial reasoning and recognize this as a limitation. Theoretically, our guidance framework can be extended to more general and quantitative scenarios. By simulating spatial drift, we can customize the ratio of drift and produce synthetic training data with quantitative values. 
This potential extension would allow models to identify not just the direction but also the extent of camera movement required. We plan to explore this in future work. In our current task, we focused on assisting visually impaired individuals, which informed our data and question design. We assumed it would be challenging for users to accurately move the camera by a precise quantitative value. However, we agree that quantitative guidance would be particularly meaningful in applications such as robotics, where precise tracking of target objects is crucial. One challenge lies in calculating the ground truth for the extent of movement needed. We are actively working on this direction, leveraging simulated environments such as Ge, Yunhao, et al. "BEHAVIOR Vision Suite: Customizable Dataset Generation via Simulation, CVPR 2024." ***W3. The two perturbation ranges of 0.1-0.9 and 0.3-0.7 are confusing. What exactly are you differentiating with these categories? It would have been interesting to evaluate over disparate ranges (e.g., 0.1-0.3, 0.3-0.5, 0.5-0.7, …) so that we can see how performance drops with the perturbations*** **Response:** We understand the confusion regarding the perturbation ranges. In our manuscript, we included a discussion on the rationale behind choosing the ranges of 0.1-0.9 and 0.3-0.7 (marked with * in the table below). These ranges were selected to balance image quality against sufficient data generation. Severe perturbations (e.g., 0.9) can lead to significant information loss, making the images harder to interpret, while minimal perturbations (0.1) do not challenge the model enough, resulting in fewer positive guidance data points. 
To explore the effect of different perturbation range settings, we present the supplementary experiment results below: LLaVA 7B Performance Across Different Perturbation Ranges | Perturbation Range | Overall F1 | Overall Accuracy | ACC(F) | |:------------------:|:----------:|:----------------:|:------:| | 0.1-0.3 | 0.19 | 0.24 | 0.31 | | 0.3-0.5 | 0.49 | 0.49 | 0.38 | | 0.5-0.7 | 0.49 | 0.49 | 0.43 | | 0.7-0.9 | 0.50 | 0.49 | 0.40 | | *0.1-0.9 | 0.57 | 0.58 | 0.44 | | *0.3-0.7 | 0.47 | 0.48 | 0.33 | These findings generally align with our discussion: as the value of the perturbation range increases, the overall accuracy and F1 score improve quickly and stay relatively stable around 0.49. However, if we focus on the accuracy on the four reframing directions (ACC(F)), it improves as the perturbation range increases but drops when reaching the highest range (0.7-0.9). This demonstrates there might be a tradeoff between the diversity of directional training data and the complexity of perturbed samples. A broader range (0.1-0.9) may lead to a balance and generally good performance across metrics. However, we can observe a general improvement on ACC(F) with all perturbation ranges. We acknowledge that studying the optimal way to choose among different perturbation ranges could provide additional insights into performance variations. This is an interesting direction for future work, and we will consider a more detailed analysis in our next version. ***Question: Why is LLaVA-1.5 13b worse than 7b?*** **Response:** Thank you for your question. We believe the difference might be due to the sensitivity of prompt settings, especially in a zero-shot scenario. Interestingly, we found that the larger model, LLaVA-1.5 13b, performed worse with a two-round prompt compared to a one-round prompt, highlighting the instability of prompt settings across different models. 
We believe this could be a valuable finding and points to the need for further research to understand and optimize prompting strategies across model sizes. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I have read the rebuttal and appreciate the author's clarifications, especially with regard to the perturbation ranges. I encourage the authors to include this discussion as well as some discussion on the effect of better prompting, CoT, etc. on the results already found. --- Reply to Comment 1.1.1: Title: Thanks for the response Comment: Thank you for your feedback. Could you kindly clarify whether you are suggesting revisions to the paper's discussion section, or if you would prefer us to engage in further discussion during the current reviewer-author discussion period? We plan to include our discussion on general perturbation ranges, diverse prompting and CoT strategies in the paper's discussion section in the next revision. We will also provide more details in the supplementary materials to demonstrate the sensitivity observed through our extensive prompt explorations.
Summary: - The paper proposes a recourse for unanswerable visual questions — rather than simply abstain, can a VLM indicate how to adjust an image so that the question becomes answerable? - The authors introduce the Directional Guidance VQA task and a dataset for the task. - The authors propose a data generation pipeline to produce data that can train models for the directional guidance task. Strengths: - The problem studied by the paper is practical and has the potential to have a real-world impact. - There have been a number of works on VQA models that can abstain, but abstention alone is not ideal for real-world deployments because it would be frustrating to be told that your question was unanswerable. - Providing a remedy for unanswerable visual questions via directional feedback is a useful and novel contribution. - The framing of the task is simple and reasonable. - The data generation pipeline is clever and uses a model-in-the-loop technique to systematically generate training instances. Weaknesses: Since this is somewhat of a resource paper, it would be nice to have multiple vision-language models of different sizes. This is also a bit of a concern given the fact that a model-in-the-loop technique is used to create a training dataset for the task, since different models would ostensibly have different weaknesses. Another weakness is that this is a very narrowly scoped task and dataset that is maybe a better fit for a CV conference. Technical Quality: 4 Clarity: 3 Questions for Authors: - Table 1 is confusing and looks aesthetically unpleasant. - Your table numbers are wrong. Ex: in L295 you refer to Table 6, but this is actually Table 1 and Section 6. Your `\label` is in the wrong place. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your recognition of the contribution and potential impact of our proposed study, and that's exactly what we are targeting. We also expect to facilitate many meaningful real-world applications such as AI assistants for visually impaired individuals. We appreciate the opportunity to discuss the following points. ***W1. Since this is somewhat of a resource paper, it would be nice to have multiple vision-language models of different sizes. This is also a bit of a concern given the fact that a model-in-the-loop technique is used to create a training dataset for the task, since different models would ostensibly have different weaknesses.*** **Response:** To validate our training pipeline, we conducted a proof-of-concept using two mainstream open-source VLMs of different sizes. This initial approach was aimed at demonstrating the feasibility and effectiveness of our method across different model architectures and sizes. We share your concern regarding the model-in-the-loop technique, particularly the potential for the training data to be influenced by the specific weaknesses of the models used. This is indeed similar to unsolved challenges in active learning, where the quality of training data is dependent on the model's performance during data collection. Moving forward, our work will focus on exploring methods to enhance the stability and robustness of our framework. This could include regularization techniques to mitigate biases introduced by model-specific weaknesses and integrating multiple models to diversify the training data. ***W2. Another weakness is that this is a very narrowly scoped task and dataset that is maybe a better fit for a CV conference.*** **Response:** Thank you for your feedback. While our study starts from a specific scenario, it is both important and generalizable. 
This new task fills the gap in current research regarding the handling of unknown cases and explores the capability of VLMs to acquire additional information when visual information is insufficient. Moreover, our proposed Directional Guidance Framework can be adapted and potentially applied to other tasks/modalities such as understanding query intention and reducing hallucination, where we must deal with insufficient or unclear information. These demonstrate a broader applicability of our work beyond a specific task. We appreciate your acknowledgment of the importance and uniqueness of our study in the strength part. This guidance capability is crucial when deploying vision-language models in real-world applications, not only for specific applications, including assisting visually impaired individuals, but also for advancing the general understanding of multi-modal AI systems. Therefore, we believe the core question of whether VLMs can guide us in acquiring more information is fundamental for broader AI communities. ***Q1:Table 1 is confusing and looks aesthetically unpleasant.*** **Response:** Thank you for your feedback. We will edit the table in the next version to avoid confusion. For example, we will consider splitting the table into two or more parts to make the information clearer and easier to navigate, and more consistent shading will be used to highlight different sections of the table. ***Q2: Your table numbers are wrong. Ex: in L295 you refer to Table 6, but this is actually Table 1 and Section 6. Your `\label` is in the wrong place.*** **Response**: Thank you for pointing out the issue with the table numbers and labels. We have reviewed and corrected the placement of the `\label` command to ensure that the tables are referenced correctly. --- Rebuttal 2: Comment: I have read the author response. I will maintain my rating of a 6. 
I do not think the paper should be rated lower because it introduces a resource annotated by humans that addresses a real problem. I do not think the paper should be rated higher because the scope is very narrow and there is little technical contribution — it's an application paper (train a VLM on a dataset with a handcrafted augmentation technique that works only for this application), albeit a useful one. Also, the organization and presentation of the paper could have been better: it was hard to parse conclusions from the body of the results section. --- Rebuttal Comment 2.1: Title: Thank you for the feedback Comment: Thank you for your feedback and for maintaining a positive rating. We appreciate your recognition of the value our resource brings. To improve the clarity and organization of the results section, we will split out the conclusions into a separate section and enhance the overall structure to ensure our findings are more clearly presented in the next revision.
Summary: This paper evaluates and expands the self-reflection capabilities of multiple MLLMs, so that MLLMs can actively evaluate the adequacy of given query information and give suggestions on how to find more reliable information. This paper proposes a hierarchical cognitive process pattern theory and creates a manually annotated dataset for evaluation. Strengths: Methods for active reflection and evaluation of given information are meaningful for enhancing the quality of MLLM responses. Weaknesses: 1. The rationality of the proposed hierarchical cognitive process pattern, which comprises three levels, merits validation. First, are these three levels both necessary and sufficient for the cognitive process? If the issues at these three levels are addressed, can we then resolve the problems associated with the MLLM cognitive process? Second, do the evaluation benchmark and the expanded MLLM capability presented in this paper align with these three levels? 2. The method proposed in this paper is fundamentally similar to the MLLM self-reflection method. Can it be directly implemented through self-reflection without requiring additional training? 3. A comparable method to the one presented in this paper is DualFocus [1], which not only actively evaluates image quality but also performs more sophisticated active grounding. 4. Lastly, the evaluation dataset used in this paper is relatively small, which makes the evaluation results less convincing. [1] Cao Y, Zhang P, Dong X, et al. DualFocus: Integrating Macro and Micro Perspectives in Multi-modal Large Language Models[J]. arXiv preprint arXiv:2402.14767, 2024. Technical Quality: 3 Clarity: 2 Questions for Authors: Evaluate the difference and experimental effect between this method and the large model self-reflection method. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging the significance of active reflection and evaluation methods. We would like to highlight that our main contribution is not only improving response quality but also providing guidance to acquire more information. We are glad to discuss the following points: ***W1. The alignment of the proposed hierarchical cognitive process*** **Response:** Thank you for questioning the hierarchical cognitive process pattern we proposed. We appreciate the opportunity to clarify the rationale behind it. The three levels—Response Generation, Awareness of Knowledge Limits, and Knowledge Acquisition Direction—serve as an analogy for studying a model's behavior from a human cognitive perspective. Several studies in the field, as mentioned in our related works, evaluate model intelligence in a similar way. Our work aims to advance models from mere self-knowledge to actively seeking additional information when needed. Understanding the real cognitive process of MLLMs remains an open question. However, we believe this analogical perspective can inspire us to understand and improve MLLM performance. In our proposed task: If a model can identify the information sufficiency and provide a relevant response, it reflects the first level (knowing what's known). If the model can identify when the available visual information is insufficient to answer the question, it demonstrates the second level (knowing what's unknown). Finally, if the model can suggest a direction to answer the question, it exhibits a sense of knowledge acquisition direction (knowing where to know the unknown). ***W2. Comparison with training-free self-reflection approaches*** **Response:** Thank you for your feedback. We do not consider our method to share significant similarities with the self-reflection approach. As you mentioned, the self-reflection method can be training-free. 
In contrast, our main method explores generating synthetic training data to improve the model's performance, which is fundamentally different. However, we agree that exploring self-reflection in our task would be meaningful. Currently, we use zero-shot CoT to explore the model’s baseline performance, and training-free self-reflection can be an additional way to reflect the model’s capability. However, the existing self-reflection/self-critic approaches focus on LLMs in the LLM-agent or tool-learning domains. With MLLMs, self-reflection is still underexplored and still lacks a unified or fixed way. Therefore, we made a complementary experiment based on some relevant self-reflection papers. (See response to Q1). ***W3. Comparison with DualFocus [1]*** **Response:** We would like to emphasize the significant difference between the two works. Specifically, our task is in a scenario where the required information may not be present or only partially present in the image. Instead of focusing on locating answerable cases, our task aims to develop VLMs' ability to guide users to seek additional information to answer questions. This capability is crucial for handling unanswerable cases. We believe that both works address different challenges. We appreciate the reviewer's suggestion and will put DualFocus in our related work. ***W4. The size of evaluation dataset*** **Response:** We appreciate the reviewer's concern on the size of our evaluation dataset. There are several points we would like to clarify in this regard: Real-World Data Collection and Cleaning: Our dataset was collected from real-world scenarios and underwent multiple rounds of rigorous data cleaning to ensure its quality. The nature of the Directional Guidance Task, which involves identifying cases with insufficient visual information, inherently results in a smaller dataset. 
This is because many real-world VQA datasets contain primarily high-quality, well-framed images, and the occurrence of low-quality or ill-framed cases is relatively rare. Also, to the best of our knowledge, there is no existing benchmark dataset that supports evaluating the Directional Guidance ability. Diverse Guidance Types: Despite the smaller size, we have carefully composed different groups of guidance types to maintain the comprehensiveness of our dataset. This diversity ensures that our dataset is sufficient to evaluate the model's performance across various scenarios relevant to our task. Each example in our dataset is thoughtfully chosen to represent a unique challenge in guiding the acquisition of additional information. We also draw attention to similar work in the field, such as the paper “https://arxiv.org/abs/2404.12390” where the evaluation dataset sizes are of a comparable scale. This precedent underscores that meaningful and rigorous evaluations can still be conducted with datasets of this size. In future work, we plan to expand our dataset further to include more diverse scenarios. However, we believe that our current dataset, despite its size, provides a valid foundation for evaluating our proposed task. ***Q1: Evaluate the difference and experimental effect between this method and the large model self-reflection method.*** **Response:** Continuing from W2, we implemented a self-reflection approach by constantly reassessing the current information and incorporating the image and all conversational contexts. It includes three steps: identifying answerability, finding the key object, and suggesting proper directional guidance. Each step involves local self-reflection to confirm the information, and interact with adjacent steps until reaching an initial answer. Finally, we integrate all conversations and self-reflect the answer with the global context. We tested this self-reflection pipeline with GPT-4o-mini. 
With a series of carefully designed self-reflection pipelines and prompt templates, the F1, ACC, and ACC(F) are 0.53, 0.53, and 0.23, respectively, which is comparable to our reported baseline. As a reference, our best-performing model achieves 0.63, 0.63, and 0.43. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed response. Their reply effectively addressed my concerns, so I have decided to increase the score from 3 to 5. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Thank you for the thoughtful reconsideration of our work. We're glad that our explanations addressed the previous concerns, and we appreciate your decision to adjust the score based on our clarifications.
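For readers who want to check how figures like the F1 and accuracy values quoted above are defined, here is a minimal, self-contained sketch of binary F1 and accuracy. The labels below are entirely hypothetical and only illustrate the metric definitions, not the actual evaluation data or the paper's exact ACC(F) protocol.

```python
# Minimal sketch of binary F1 and accuracy for an answerability-style task.
# The gold/pred labels are made up for illustration only.

def binary_f1_acc(gold, pred, positive="answerable"):
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return f1, acc

gold = ["answerable", "unanswerable", "answerable", "unanswerable"]
pred = ["answerable", "answerable", "answerable", "unanswerable"]
f1, acc = binary_f1_acc(gold, pred)  # f1 = 0.8, acc = 0.75 on this toy data
```

In practice one would compute these per class and macro-average, but the toy version above is enough to interpret the reported 0.53/0.53/0.23 versus 0.63/0.63/0.43 comparison.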
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their time and valuable feedback. We are encouraged that the reviewers recognize our threefold contribution: - This work identifies a meaningful problem with real-world impact: "Can VLMs guide us in acquiring more information beyond simply abstaining?" (Reviewer *Eiqe* and Reviewer *SnhN*) - We present a novel human-annotated Directional Guidance Benchmark Dataset (Reviewer *SnhN*) - Our paper introduces an effective synthetic data generation and training pipeline to improve the quality of MLLM/VLM responses. (Reviewer *eg4t*, Reviewer *Eiqe*, and Reviewer *SnhN*) --- Additionally, we would like to address some common concerns raised by the reviewers: - **Concern 1**. Comparison with more training-free methods (such as more sophisticated CoT and self-reflection prompting) for evaluating the baseline performance. The prompt design is an open question, and we don't rule out that there will always be a better prompt design to get a performance boost. We developed a series of prompting techniques to the best of our knowledge as mentioned in Section 4, and we will include more experimental results (including failure cases) in the revision. Meanwhile, we would like to highlight that the focus of this work is on the design of the task and the (data generation and training) pipeline that significantly improves the VLM's performance on this task. - **Concern 2**. The task is narrowly scoped and limited. While our study is based on a specific task, its implications and applications are wide-ranging. We are glad most reviewers emphasize the importance and necessity of this task (Reviewer *Eiqe* and *SnhN*) and that it hits one of the pain points of existing VLMs (Reviewer *Eiqe*). We also appreciate Reviewer *Eiqe* for highlighting that our task framing is straightforward yet has a potential real-world impact. 
Moreover, both the task and our proposed method can be generalized to other scenarios and potentially mitigate the struggles when models face insufficient information. Our contributions have the potential to make a significant impact on both the research community and practical deployments of VLMs, affirming the relevance and importance of our work in the broader AI landscape.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Is Your LiDAR Placement Optimized for 3D Scene Understanding?
Accept (spotlight)
Summary: This paper proposes Place3D, a full-cycle pipeline that encompasses LiDAR placement optimization, data generation, and downstream evaluations. The framework makes three appealing contributions. 1) To identify the most effective configurations for multi-LiDAR systems, this paper introduces a Surrogate Metric of the Semantic Occupancy Grids (M-SOG) to evaluate LiDAR placement quality. 2) Leveraging the M-SOG metric, this paper propose a novel optimization strategy to refine multi-LiDAR placements. 3) Centered around the theme of multi-condition multi LiDAR perception, the authors collect a 364,000-frame dataset from both clean and adverse conditions. Extensive experiments demonstrate that LiDAR placements optimized using Place3D outperform various baselines. Strengths: - This paper makes the first attempt at investigating the impact of multi-LiDAR placements on 3D semantic scene understanding, which is seldom explored in LiDAR-based detection and segmentation. - This paper proposes a more comprehensive metric, M-SOG. Experiments show M-SOG is more relevant to model performance than S-MIG, demonstrating its effectiveness in representing scene coverage. - Guided by M-SOG, a LiDAR placement optimization method is proposed in this work, which can provide a solid baseline for LiDAR placement optimization. Weaknesses: - The presentation may be a little confusing. The M-SOG and the optimization method, under my understanding, are task-independent. However, there are two optimized LiDAR placements related to the Det and Seg tasks, which is a little confusing. - To demonstrate the effectiveness of M-SOG, it's recommended to draw a scatter plot of M-SOG and model performance in Tab2 and Tab3, which can be more intuitive. Technical Quality: 3 Clarity: 3 Questions for Authors: - I'm curious about the influence of the LiDAR numbers. It's better to add an experiment to explore the most cost-effective LiDAR number. - M-SOG seems to be a global metric. 
Can It also reflect the scene coverage of a specific local area? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Nan Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer `iXBb`,** Thanks for devoting time to this review and providing valuable comments. --- >Q: The representation may be a little confusing. The M-SOG and the optimization method, under my understanding, are task-independent. However, there are two optimized LiDAR placements related to Det and Seg tasks, which is a little confusing. A: Sorry for the confusion and thanks for the comment. To clarify, the M-SOG and placement optimization strategy are task-independent, but the optimization result depends on the semantic classes involved. For object detection (Det) and semantic segmentation (Seg), the semantic classes differ. Det considers three object categories: Car, Pedestrian, and Bicycle, while Seg includes 21 object and environment classes, such as Building and Fence. Consequently, the placement optimization result varies between Det and Seg due to the different semantic class definitions. >Q: To demonstrate the effectiveness of M-SOG, it's recommended to draw a scatter plot of M-SOG and model performance in Tab 2 and Tab 3, which can be more intuitive. A: Thanks for your suggestion. We include the plot in the attached single-page rebuttal PDF. As illustrated, the results demonstrate a clear correlation, where the performance generally improves for both tasks as the M-SOG increases. While there might be fluctuations in some placements with specific algorithms, the overall relationship follows a linear correlation, highlighting the effectiveness of our M-SOG for computationally efficient sensor optimization purposes. >Q: I am curious about the influence of LiDAR numbers. It is better to add an experiment to explore the most cost-effective LiDAR number. A: Thanks for your suggestion. We appreciate the reviewer's comment highlighting the importance of determining the most cost-effective number of LiDARs. 
In our study, we investigate the strategic placement of 4 LiDARs, as preliminary experiments indicated that a perception system utilizing 4 LiDARs is more cost-effective compared to other configurations. We test the detection and segmentation performance using 1 to 5 LiDARs, respectively. The experimental results are as follows:

|Method|1x LiDAR|2x LiDAR|3x LiDAR|4x LiDAR|5x LiDAR|
|-|:-:|:-:|:-:|:-:|:-:|
|BEVFusion-L|35.1|39.4|46.2|52.5|55.3|
|Improvement|-|+4.3|+6.8|+6.3|+2.8|

We observe that the detection performance of BEVFusion-L improved as the number of LiDARs increased. The most cost-effective choice is 4 LiDARs, as a fifth LiDAR yields only a slight improvement. We will include more experiments on this point using other detectors in the revision. Additionally, we also supplement the segmentation results of SPVCNN using 1 to 5 LiDARs, respectively:

|Method|1x LiDAR|2x LiDAR|3x LiDAR|4x LiDAR|5x LiDAR|
|-|:-:|:-:|:-:|:-:|:-:|
|SPVCNN|40.3|56.6|62.7|68.6|69.1|
|Improvement|-|+16.3|+6.1|+5.9|+0.5|

Similar to detection, we observe improved segmentation performance with the use of more LiDARs. Again, using 4 LiDARs tends to achieve the optimal trade-off between segmentation accuracy and computational cost. To further consolidate this finding, more results on other LiDAR segmentation methods will be supplemented in this revision. >Q: M-SOG seems to be a global metric. Can it also reflect the scene coverage of a specific local area? A: Thanks for your question. Yes, the M-SOG can be adapted to reflect the scene coverage of specific local areas by segmenting the scene into smaller regions. The score of M-SOG is connected to the voxels traversed by LiDAR rays in the scene. In our study, the LiDAR is set to sweep [-24.6, 2.0] vertically and [-180.0, 180.0] horizontally in degrees. To compute the M-SOG of local areas, we can reset the sweep range of LiDAR rays according to the specific local areas we are interested in, and then get the local traversed voxels. 
By performing the same process of global M-SOG to local traversed voxels, we can obtain the M-SOG scores which reflect the scene coverage of specific areas. --- Rebuttal Comment 1.1: Title: Looking Forward to Discussion with You Comment: **Dear Reviewer `iXBb`,** We sincerely appreciate your time and effort in reviewing our manuscript and providing valuable feedback. --- In response to your insightful comments, we have made the following revisions to our manuscript: - We have clarified the task-independent nature of M-SOG and the optimization method and explained the variations in optimized placements for different tasks. - We have added experiments to explore the influence of the number of LiDARs on performance, showing the most cost-effective LiDAR numbers. - We have included discussions on the local area coverage using M-SOG, adapting the metric to reflect specific regions. - We have added scatter plots of the M-SOG and model performance, which can be found in the attached single-page Rebuttal PDF file. --- We hope these revisions adequately address your concerns. We look forward to actively participating in the **Author-Reviewer Discussion** session and welcome any additional feedback you might have on the manuscript or the changes we have made. --- Once again, we sincerely thank you for your contributions to improving the quality of this work. Best regards, The Authors of Submission 1401
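To make the global-versus-local M-SOG discussion above more concrete, here is a toy sketch of one plausible way to aggregate entropy over the semantic class distributions of traversed voxels. This is our illustrative sketch under stated assumptions, not the authors' implementation: the paper's exact aggregation, normalization, and sign convention may differ, and the voxel distributions below are invented.

```python
import math

def voxel_entropy(class_probs):
    """Shannon entropy (in nats) of one voxel's semantic class distribution."""
    return -sum(p * math.log(p) for p in class_probs if p > 0)

def surrogate_score(voxel_distributions):
    """Aggregate per-voxel entropies over the voxels traversed by LiDAR rays.

    `voxel_distributions` maps a voxel index to its semantic class
    distribution. Restricting this dict to voxels traversed within a local
    sweep range yields a local variant of the score, as described above.
    """
    return sum(voxel_entropy(probs) for probs in voxel_distributions.values())

# Hypothetical toy scene: two traversed voxels over 3 semantic classes.
voxels = {
    (0, 0, 0): [0.8, 0.1, 0.1],      # voxel dominated by one class
    (1, 0, 0): [1/3, 1/3, 1/3],      # maximally uncertain voxel
}
score = surrogate_score(voxels)
```

The point of the sketch is only the structure: the metric is a function of which voxels the rays traverse, so changing the sweep range changes the voxel set and hence the (local) score.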
Summary: The paper targets an overlooked but important aspect of 3D scene understanding -- the influence of LiDAR sensor placements for the success of 3D perception tasks, such as 3D detection and 3D segmentation. The authors introduce Place3D, a comprehensive pipeline for optimizing LiDAR placement in autonomous driving scenarios to enhance 3D scene understanding, especially under some adverse conditions. Indeed, current single-LiDAR systems and datasets tend to fail to capture the complexities of real-world environments. This work proposes to address this by using the Surrogate Metric of the Semantic Occupancy Grids (M-SOG) to evaluate LiDAR placement quality. M-SOG measures the information gain and scene understanding capability by calculating the entropy of semantic voxel distributions. The optimization strategy employs the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to iteratively improve LiDAR configurations. The objective function is the M-SOG score, ensuring that the optimized placements are close to the global optimum. Compared to the existing S-MIG, this work makes a significant contribution by extending the binomial occupancy distribution to multinomial occupancy distribution through multiple semantic labels for better scene understanding. Place3D also introduces a dataset of 364,000 frames collected under various conditions, including severe weather and sensor failures. The optimized placements show superior performance in LiDAR semantic segmentation and 3D object detection tasks compared to existing baselines. Strengths: 1. This work focuses on a less-explored but rather important perspective of 3D scene understanding. The established LiDAR placement evaluation benchmark and the proposed placement optimization method could open up some new and interesting research directions in related research areas. 2. 
An innovative metric, M-SOG, for evaluating LiDAR placement quality by incorporating semantic information, which improves accuracy over previous methods. Additionally, this work contributes a full-cycle approach from LiDAR placement optimization to data generation and downstream evaluation. 3. The proposed Place3D dataset includes several adverse conditions for realistic and practical evaluation, demonstrating significant improvements in robustness. Additionally, the use of CMA-ES ensures that LiDAR placements are close to the global optimum, with strong theoretical optimality certification. Weaknesses: 1. In the experiments, although the baseline LiDAR placement methods are based on existing configuration of current autonomous vehicle systems (Fig. 7), it is better to compare the proposed method with some randomly sampled LiDAR placements. This can further justify the effectiveness of CMA-ES compared to a wider range of scenarios. 2. It is not clear how the density of the occupancy grid (Sec. 3.3) will influence the surrogate metric, as the estimation of semantic distribution depends on the number of samples. 3. The experiments are conducted in the simulation. Although CARLA is a rather realistic simulator and has been widely adopted in academia, it is unknown how well the proposed method works in the real-world setting. Perhaps it is not feasible to conduct on-site experiments, but more justifications regarding this aspect are needed for further discussion. 4. The optimization process and data generation might require substantial computational resources. Either the main body or the appendix omitted such details. It is highly recommended to supplement such details as they are critical for follow-up works. 5. Can the proposed benchmarks and optimization strategies be extended to other 3D scene understanding tasks, such as 3D object tracking, semantic occupancy prediction, etc? 
Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the above section for detailed questions. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have included some discussions of the potential limitations and societal impact in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer `PfQe`,** Thanks for devoting time to this review and providing valuable comments. --- >Q: In the experiments, although the baseline LiDAR placement methods are based on the existing configuration of current autonomous vehicle systems (Fig 7), it is better to compare the proposed method with some randomly sampled LiDAR placements. This can further justify the effectiveness of CMA-ES compared to a wider range of scenarios. A: Thanks for the insightful comments. We acknowledge that the theoretical upper bound for the performance of the placements optimized by our method cannot be exceeded by random LiDAR placements. Although enumerating all possible LiDAR placements for random selection is computationally infeasible, we added experiments evaluating the performance of three random placement strategies as follows:

|Method|Center|Line|Pyramid|Square|Trapezoid|Line-roll|Pyramid-roll|Random 1|Random 2|Random 3|Ours|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|PolarNet|71.0|67.7|67.7|69.3|66.8|67.2|70.9|65.4|64.7|65.1|76.7|
|MinkUNet|65.7|59.7|62.7|60.7|59.0|58.5|62.2|60.1|59.6|59.9|66.5|
|SPVCNN|67.1|59.3|67.6|63.4|61.0|60.6|67.9|59.4|59.1|59.6|68.3|
|Cylinder3D|72.7|68.9|68.4|69.9|68.5|69.8|69.3|68.0|66.7|67.9|73.0|

As can be seen from the above study, randomly sampled placements tend to be sub-par compared to the existing and our optimized strategies. This verifies the effectiveness of the proposed CMA-ES under more general scenarios. >Q: It is not clear how the density of the occupancy grid (Sec 3.3) will influence the surrogate metric, as the estimation of semantic distribution depends on the number of samples. A: Thanks for your question. 1) Density of samples: As the number of samples used to merge and synthesize dense semantic occupancy grids increases, the accuracy of the M-SOG metric is enhanced. 
This improvement is attributed to the smoother and more continuous semantic representation of the scene resulting from the increased density of semantic LiDAR points. A higher sample density provides a more detailed and granular depiction of the environment, reducing gaps and ambiguities. 2) Density of voxels: As the grids become denser, the M-SOG scores for all placements will slightly increase, and the linear consistency between our metric and perception performance improves. However, with increasing density, linear consistency of M-SOG may converge as the voxel size decreases to very small dimensions. In the experiments, we used voxel sizes matching those in the detection or segmentation training parameters to compute M-SOG, aiming for better linear consistency. For example, for segmentation, we used voxel sizes of [0.10, 0.10, 0.10]. We computed the M-SOG of other voxel sizes for comparison as follows: |Voxel Grid Size|Line-roll|Trapezoid|Line|Pyramid|Square|Pyramid-roll|Center|Ours| |-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |[0.20, 0.20, 0.20]|-5.64|-6.72|-6.13|-5.88|-4.30|-4.24|-4.52|-4.70| |[0.10, 0.15, 0.20]|-3.80|-3.65|-3.22|-3.29|-2.67|-2.39|-2.47|-2.28| |[0.10, 0.10, 0.10]|-3.13|-2.89|-2.62|-2.56|-2.35|-1.63|-1.58|-1.29| |[0.05, 0.10, 0.10]|-2.97|-2.75|-2.40|-2.33|-2.12|-1.37|-1.24|-1.13| >Q: The experiments are conducted in the simulation. Although CARLA is a rather realistic simulator and has been widely adopted in academia, it is unknown how well the proposed method works in a real-world setting. Perhaps it is not feasible to conduct on-site experiments, but more justifications regarding this aspect are needed for further discussion. A: Thanks for your insightful comments. We appreciate the reviewer's concern regarding the applicability of our proposed method in real-world settings. The main challenge of conducting real-world experiments is the inability to ensure identical scenarios for testing different placements. 
Given multiple LiDAR placements, we need to collect data multiple times in the same area. Even if we ensure that the ego vehicle follows the exact same route and the data is collected at the same time each day, we cannot guarantee the activities of traffic participants within the scene will be identical. This variability in scenarios can lead to difficulties in making fair comparisons. Real-world experiments are also time-consuming and resource-intensive, which can limit the extent and frequency of testing. We will include more discussion of extensions to real-world settings in the revised version. >Q: The optimization process and data generation might require substantial computational resources. Either the main body or the appendix omitted such details. It is highly recommended to supplement such details as they are critical for follow-up work. A: Thanks for your suggestion. For data generation in CARLA, we recommend using at least 8GB GPUs. For the optimization process, the hardware requirements primarily depend on the resolution of the voxel representation used for the scene. A CPU with a substantial number of cores and at least 64GB of RAM is necessary for efficient optimization through parallel processing. The amount of RAM needed scales with the resolution of the voxel representation. We will include details in the revision. >Q: Can the proposed benchmarks and optimization strategies be extended to other 3D scene understanding tasks, such as 3D object tracking, semantic occupancy prediction, etc? A: Thanks for your question. Yes, the proposed benchmarks and optimization strategies can indeed be extended to other 3D scene understanding tasks, such as 3D object tracking and semantic occupancy prediction, provided we can obtain the semantic prior knowledge from the training dataset. In fact, the M-SOG is task-agnostic as it only evaluates the overall semantic coverage quality provided by the LiDAR placements. 
The differences in optimization results only arise from the varying semantic settings defined in different tasks. --- Rebuttal Comment 1.1: Title: Looking Forward to Discussion with You Comment: **Dear Reviewer `PfQe`,** We sincerely appreciate your time and effort in reviewing our manuscript and providing valuable feedback. --- In response to your insightful comments, we have made the following revisions to our manuscript: - We have included additional experiments comparing the proposed method with randomly sampled LiDAR placements to justify the effectiveness of our method. - We have included details on the computational resources required for the optimization process and data generation. - We have discussed the feasibility of extending our benchmarks and optimization strategies to other 3D scene understanding tasks. - We have supplemented details on the influence of the density of the occupancy grid on the surrogate metric. --- We hope these revisions adequately address your concerns. We look forward to actively participating in the **Author-Reviewer Discussion** session and welcome any additional feedback you might have on the manuscript or the changes we have made. --- Once again, we sincerely thank you for your contributions to improving the quality of this work. Best regards, The Authors of Submission 1401 --- Rebuttal Comment 1.2: Comment: Thanks to the authors for the comprehensive responses to my review comments. The major concerns from my comments have been resolved. I have also gone through the author’s responses to other three reviewers’ comments. I am confident that this work is of a good value in the area of research and could open up future research opportunities. Therefore, I decide to change the rating from “Weak Accept” to “Accept”.
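Since CMA-ES recurs throughout this thread, a deliberately simplified evolution-strategy loop in its spirit may help readers unfamiliar with the optimizer. This is our sketch only: full CMA-ES additionally adapts a covariance matrix for the sampling distribution, and the toy `score` function below is a hypothetical stand-in for the M-SOG objective over a 2-D placement space.

```python
import random

def score(x):
    # Hypothetical stand-in for the M-SOG surrogate: a concave toy objective
    # whose maximum is at (1.0, 2.0). The real objective scores a placement.
    return -((x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2)

def simple_es(dim=2, pop=20, elite=5, sigma=1.0, iters=60, seed=0):
    """Minimal (mu/mu, lambda)-style evolution strategy with a decaying step size."""
    rng = random.Random(seed)
    mean = [0.0] * dim
    for _ in range(iters):
        # Sample candidate placements around the current mean.
        cands = [[m + rng.gauss(0, sigma) for m in mean] for _ in range(pop)]
        cands.sort(key=score, reverse=True)
        best = cands[:elite]
        # Move the mean toward the elite candidates and shrink the step size.
        mean = [sum(c[i] for c in best) / elite for i in range(dim)]
        sigma *= 0.95
    return mean

best_placement = simple_es()  # converges near the toy optimum (1.0, 2.0)
```

The covariance adaptation omitted here is what lets real CMA-ES shape its search distribution to the objective's local geometry, which matters for the constrained placement spaces discussed in the paper.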
Summary: The authors present a very interesting study of optimal lidar placement on autonomous vehicles. They introduce a novel measure of the quality of lidar placement called M-SOG, and also discuss an optimization approach to find the best placement. They measure the impact of the placement on the important vision tasks in autonomous vehicles, such as object detection and semantic segmentation, showing that the placement metric is aligned with the improved quality. Lastly, in addition to clear weather they also analyze the impact on data when the conditions are degraded (such as fog or faulty sensors). Moreover, they also provide the data and the code that support their work. Strengths: - Very interesting and timely work. - The new metric and the results could be of interest to the self-driving community, and the methods should not be difficult to implement. - The presented results are quite promising. Weaknesses: - The paper writing can be improved significantly, as the method is not actually clearly explained. - The authors tried to stuff too many things in a single paper (metric and degraded sensors), which degraded the focus and impact of the work. - The experiments can be improved significantly. (detailed comments on the above weaknesses can be found in the section below) Technical Quality: 2 Clarity: 2 Questions for Authors: - Line 107, "voxelized grips" -> "voxels"? - Line 110, "where each voxel is assumed to be independently and identically distributed", what does this actually mean? The voxels are clearly not independent of each other. - The notation can be improved, as it is confusing at times and not properly introduced. E.g., line 112, what is T, 1. - Along similar lines, H is not properly introduced a few lines below. - Line 143, should P_SOG also have a k-index in its name? As it is defined for a particular k, as per the given equation. - In the paragraph starting with line 150, the authors discuss the metric. 
However, given that the metric is computed using only a subset of voxels, is there an invalid setup that the metric would not capture properly, such as putting lidars below the car or something similar? - Fig 2, would be good to zoom in on the roof of the car to make it more visible, as that is the relevant part in this figure. - The optimization in Section 3.3 is not well explained, and the notation is also not properly introduced. - E.g., the authors do not properly explain what is the lidar configuration space, or what are the constraints. Some details can be gleaned here and there, but given how critical this part is for their work the authors should do a much better job explaining it. - Along these lines, line 173 mentions cov-mat of distribution, distribution of what? This is very poorly explained. - Also, in this paragraph the authors use k-index for samples, but k was already used for classes? - How significant and how practical is the theoretical analysis? The authors introduce C_M as a gap, but this constant is not discussed anywhere, nor how large it is. This makes the theoretical discussion quite weak. More discussion and more clarifications are needed. - Line 238, Tables 2 and 3 don't actually show this. - The detectors used in these experiments are not clearly discussed. - In general, it seems that the authors tried to stuff too many things at the same time in the work and in the experiments. This made them relegate many results and discussions to the appendix, even those that are critical to the work. It seems that it would be much better to focus on one aspect first (e.g., metric + impact on vision results in clear weather) and only then expand from there. - What are the points in Fig 4, not clear and not discussed. - The adverse aspect is very important, yet the authors do not spend a lot of time/space on it, and the results are just given. This is unfortunate, and weakens their argument around this part of their work. 
- Line 253, this was already mentioned in line 238? - Line 263, why is the placement good for adverse weather? Any deeper insights? How generalizable are these placements? All these important questions are not discussed at all, and the authors just skip them. This is another symptom of stuffing too many things in one paper and losing focus and depth. - Line 270, the authors just move all important ablation results to the appendix, which is not acceptable. At this point the authors are just abusing the appendix. Please note that the appendix is not meant to increase the page limit, and critical experiments/results/discussions should NOT be moved there. - More discussions and insights about the found placement should be added, such as around 2D or roll solution. Any advice for the designers, any interesting findings? This is not provided at all. ################### FEEDBACK AFTER THE REBUTTAL ################### I would like to thank the authors for responding to my comments, and also to the other reviewers who helped me further understand the paper. However, after reading through all the other responses/discussion it seems that my main concerns were not addressed. E.g., while I said that the paper is too stuffed and poorly explained, the authors just said that they will add more explanations and that's that. First, I'm not sure where this would be added since there is no space and the paper is overstretched as is, and second, the required changes are significant and just saying that it will be addressed is not enough, the paper would need to be re-reviewed. Moreover, the authors marked my comments under "Other Minor Modifications", which just shows that they did not appreciate the severity of my comments. I do like the idea and I do think that the work has a future, but with the current execution I just cannot see how I can increase my recommendation. 
The paper needs quite a lot of attention and rewriting, so much so that I am not confident that the authors would be able to do that with quick updates, especially given their responses that made light of some of my more important comments. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors do not discuss the limitations in a separate section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer `DpmT`,** Thanks for devoting time to this review and providing valuable comments. --- >Q: Writing can be improved. A: Thanks for your comment. Due to limited space for response, for minor modifications (e.g. notations), please refer to the above Author Rebuttal section. For specific questions, please refer to the following. >Q: The voxels are not independent of each other. A: Sorry for the confusion. The voxels in a 3D space are not independent of each other due to spatial correlations in a single frame. We intended to convey that the probability distribution of occupancy for each voxel can be modeled independently across a large number of frames. >Q: Is there an invalid setup that the metric would not capture properly? A: Thanks for asking. In our optimization, we consider LiDAR placement within a limited hyper-rectangle space on the vehicle roof. This ensures practical and relevant placements, avoiding physically or operationally infeasible configurations. >Q: The optimization in Sec 3.3 is not well explained. A: In Sec 3.3, we introduce an optimization approach using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to find optimal LiDAR placements. This method iteratively searches and refines LiDAR configurations in a hyper-rectangle solution space to maximize the M-SOG-based objective function. We will include more in the revision. >Q: How significant and practical is the theoretical analysis? A: Thanks for the question. The theoretical analysis gives a guarantee for the optimality of the local optima from the heuristic CMA-ES, under assumptions on the Lipschitz constant $k_G$ and the gap $C_M$ between the maximum and minimum of the objective function $G$. As we have shown in Lines 201-204, the Lipschitz constant $k_G$ and $C_M$, the difference between the maximum and minimum of $G$ over $S$, can be approximated by calculating $G(u^k_i)$ in Algorithm 1 for each sampled $u^k_i$ over the $\delta$-density grid subset. 
More specifically, in each iteration $k$, $k_G \approx \max_i \frac{G(u^k_{i+1}) - G(u^k_i)}{\|u^k_{i+1} - u^k_i\|}$ and $C_M \approx \max_i G(u^k_i) - \min_i G(u^k_i)$. As a special case, if the input space is bounded as a hyper-rectangle $U$ (as in our experiments), the calculation of $C_M$ can be omitted, as shown in Corollary 1. We humbly argue that the theoretical analysis is significant in giving theoretical insight into the optimization of NN performance from semantic transformation input (e.g. LiDAR placement in Euclidean space), which may inspire robustness analysis from other semantic transformation perturbations, e.g., rotation, translation, brightness change, etc. >Q: Line 238, Tabs 2 and 3 don't show the correlation between M-SOG and performance. A: Thanks for your comment. Tabs 2 & 3 show the performance of different LiDAR placements, and Tab 1 shows the M-SOG of LiDAR placements. We add a plot of M-SOG and performance in the rebuttal PDF to illustrate it better. >Q: The detectors used in these experiments are not clearly discussed. A: Thanks for your comment. We used 4 detectors: PointPillars, the BEV methods CenterPoint and BEVFusion-L, and the spatio-temporal representation method FSTR, as discussed in Appendix B.3. We will include more discussion in the revision. >Q: The authors tried to stuff too many things in one paper. A: Thanks for your comment. We intend to provide a comprehensive view of our contributions, including the introduction of the M-SOG metric, the optimization of LiDAR placements, and the evaluation of placements under both clean and adverse weather conditions. Covering all these aspects is essential to fully demonstrate the value and impact of our method. >Q: What are the points in Fig 4? A: Thanks for asking. Fig 4 illustrates comparisons to existing sensor placement methods, as discussed in Sec 4.2 (Lines 244 to 251). We compare S-MIG [34] and our M-SOG in terms of the linear consistency between metrics and performance, with M-SOG doing better. 
Greater linear consistency indicates that metrics can effectively predict perception performance, enhancing downstream optimization tasks. >Q: The authors do not spend a lot of time/space on the adverse aspect. A: Thanks for the comment. In Sec 4.1, we detailed the types of adverse conditions considered, including severe weather, external disturbances, and sensor failures. In Sec 4.2, we validated the performance of optimized LiDAR placements under adverse conditions, showing consistent outperformance compared to baselines. In Sec 4.3, we investigated how our method informs LiDAR placements to improve robustness against corruptions. As suggested, we will further explain and analyze the adverse aspects during revision. >Q: Why is the placement good for adverse weather? A: Thanks for asking. The M-SOG utilizes semantic occupancy as prior knowledge, which is invariant to changing conditions, thereby enabling the robustness of the optimized placement in adverse weather. We also design an ablation study to explore the interplay between the optimization method and perception performance under corruptions in Sec 4.3. >Q: How generalizable are these placements? A: To clarify, our method is data-driven, and the optimized LiDAR placement aligns with the deployment scenarios. This is common industry practice: specific vehicle configurations are deployed for a specific country or region. However, our method itself is generalizable to any scenario. >Q: More discussions and insights about the found placement. Any interesting findings? A: Our experiments revealed several findings: - LiDAR heights: Increasing the average height of the 4 LiDARs improves performance, likely due to an expanded field of view. - Variation in heights: A greater difference in LiDAR heights enhances perception, as varied height distributions capture richer features from the sides of objects. 
- Uniform distribution: For Seg, the Pyramid placement performs better, as a more spread-out and uniform distribution of 4 LiDARs captures continuous surfaces. --- Rebuttal Comment 1.1: Title: Looking Forward to Discussion with You Comment: **Dear Reviewer `DpmT`,** We sincerely appreciate your time and effort in reviewing our manuscript and providing valuable feedback. --- In response to your insightful comments, we have made the following revisions to our manuscript: - We have improved the explanation of the optimization approach using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and added more details to the theoretical analysis. - We have clarified our experiments for comparison between our M-SOG and the existing placement method S-MIG in Fig 4 and provided several findings and insights into LiDAR placement in our experiments. - We have improved the clarity of our figures, such as zooming in on the relevant parts in Fig 2. To better explain Tabs 2 and 3, we have added a scatter plot of M-SOG and model performance, as shown in the attached single-page Rebuttal PDF. - We have clarified notations and improved the overall writing to make the method more understandable. For typos, we corrected "voxelized grid" to "voxels". For notations, we properly introduced notations $T$ and $H$, included the $k$ index in $P_{SOG}$, and revised the $k$ index in optimization to avoid overlap. We also did a thorough check to ensure all notations were properly introduced and clearly explained. 
--- Additionally, we provide a list of **Summary of Notations** as follows:

|Notation|Explanation|
|-|-|
|$L$|Length of the Region of Interest (ROI)|
|$W$|Width of the Region of Interest (ROI)|
|$H$|Height of the Region of Interest (ROI)|
|$\delta_L$|Resolution of voxelization in the length dimension|
|$\delta_W$|Resolution of voxelization in the width dimension|
|$\delta_H$|Resolution of voxelization in the height dimension|
|$S$|Set of voxels in the Region of Interest (ROI)|
|$N$|Total number of voxelized grids|
|$K$|Total number of semantic labels|
|$v_i$|Voxel $i$|
|$p(v_i)$|Probability of voxel $v_i$ being occupied|
|$y_t$|Frame $t$|
|$T$|Total number of frames|
|$H(v_i)$|Entropy of voxel $v_i$|
|$H(v_{i} &#x7c; L)$|Conditional entropy of voxel $v_i$ given LiDAR placement $L$|
|$H_{POG}$|Total entropy of POG (Probabilistic Occupancy Grids)|
|$p_{POG}$|Joint probability of all non-zero voxels in the ROI|
|$p_{POG &#x7c; L}$|Conditional probability of POG given LiDAR placement $L$|
|$H_{POG &#x7c; L}$|Conditional entropy of POG given LiDAR placement $L$|
|$\Delta H$|Information gain of 3D scene understanding|
|$S_{SOG}$|Semantic occupancy grid|
|$s_{y_k}^{(t)}$|The set of voxels occupied by semantic label $y_k$ at frame $t$|
|$p(v_i = y_k)$|Probability of voxel $v_i$ being occupied by semantic label $y_k$|
|$\mathcal{P}_{SOG}$|Conditional joint distribution of SOG given LiDAR placement $L$|
|$\mathcal{H}_{SOG}$|Entropy of SOG|
|$L_j$|LiDAR configuration $j$|
|$\mathcal{H}^{L=L_j}_{SOG}$|Conditional entropy distribution of P-SOG over the voxel set $S &#x7c; L_j$|
|$\mathbf{M}_{SOG}(L_j)$|Normalized surrogate metric of SOG|
|$\mathbb{E}$|Expectation operator|
|$\mathbf{u}_j$|LiDAR configuration $j$ in objective function $F(\mathbf{u}_j)$|
|$F(\mathbf{u}_j)$|Objective function for LiDAR configuration $j$|
|$P(\mathbf{u})$|Physical constraint for LiDAR configuration|
|$G(\mathbf{u})$|Function representing the surrogate metric|
|$U$|Set of all potential LiDAR configurations|
|$\mathbf{u}^*$|Optimal LiDAR configuration|
|$\mathbf{m}^{(k)}$|Mean vector of the distribution at iteration $k$|
|$\sigma^{(k)}$|Step size of the distribution at iteration $k$|
|$\mathbf{C}^{(k)}$|Covariance matrix of the distribution at iteration $k$|
|$\mathcal{N}(\mathbf{m}^{(k)}, (\sigma^{(k)})^2\mathbf{C}^{(k)})$|Normal distribution for sampling ($k$ will be updated to $g$ to avoid overlap with semantic class)|
|$\delta$|Density for discretizing the configuration space $U$|
|$\mathbf{u}_i^{(k)}$|Sampled candidate configuration at iteration $k$|
|$M_k$|Number of best solutions used to update $m^{(k+1)}$|
|$w_i$|Weights based on solution fitness|
|$\mathbf{p}_\mathbf{C}^{(k+1)}$|Evolution path accumulating information about the direction of successful steps|
|$c_\mathbf{C}$|Learning rate for updating the covariance matrix|
|$p_{\sigma}$|Evolution path for step size adaptation|
|$c_{\sigma}$|Learning rate for updating the evolution path $p_{\sigma}$|
|$d_{\sigma}$|Normalization factor for step size adaptation|
|$E\|N(0, I)\|$|Expected length of a standard normally distributed random vector|
|$\| \cdot \|$|Euclidean norm|

--- We hope these revisions adequately address your concerns. We look forward to actively participating in the **Author-Reviewer Discussion** session and welcome any additional feedback you might have on the manuscript or the changes we have made. --- Once again, we sincerely thank you for your contributions to improving the quality of this work. Best regards, The Authors of Submission 1401
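The finite-difference estimates of the Lipschitz constant $k_G$ and the gap $C_M$ described in the rebuttal above can be sketched in a few lines. This is a hypothetical illustration with a toy objective standing in for $G$, not the authors' M-SOG implementation:

```python
import numpy as np

# Hypothetical sketch (not the authors' code) of the estimates above:
#   k_G ~ max_i |G(u_{i+1}) - G(u_i)| / ||u_{i+1} - u_i||
#   C_M ~ max_i G(u_i) - min_i G(u_i)
# computed over configurations sampled from a bounded solution space.

def estimate_lipschitz_and_gap(samples, G):
    """Estimate the Lipschitz constant k_G and the objective gap C_M
    from sampled configurations `samples` (shape [n, d])."""
    values = np.array([G(u) for u in samples])
    slopes = np.abs(np.diff(values))                          # |G(u_{i+1}) - G(u_i)|
    dists = np.linalg.norm(np.diff(samples, axis=0), axis=1)  # ||u_{i+1} - u_i||
    k_G = float(np.max(slopes / np.maximum(dists, 1e-12)))
    C_M = float(values.max() - values.min())
    return k_G, C_M

# Toy smooth objective over a 2D "placement" space; the real G would be
# the M-SOG computation over the discretized configuration space.
rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(64, 2))
k_G, C_M = estimate_lipschitz_and_gap(samples, lambda u: float(np.sin(u[0]) + u[1] ** 2))
print(f"k_G estimate: {k_G:.3f}, C_M estimate: {C_M:.3f}")
```

Both quantities come out strictly positive for any non-constant objective; with more samples the estimates tighten toward the true constants.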
Summary: This paper proposes a framework called Place3D to investigate the placement of multiple LiDAR sensors for semantic segmentation and object detection tasks under various weather conditions. Place3D mainly introduces a Surrogate Metric of Semantic Occupancy Grids (M-SOG) to evaluate the perception performance with different sensor placements. Besides, an optimization approach is applied to refine the LiDAR placement. The method is evaluated on the simulated dataset with CARLA, showing the advantages of the refined placements. Strengths: - The motivation is interesting and the paper is well-written. - A new metric and a dataset are proposed for the evaluation of the placement. - The experiments are extensive, three typical models are evaluated on 7 weather scenarios. - Comparison with 7 popular placement methods shows the advantages of the refined placement with the proposed methods. Weaknesses: - Although the proposed method investigates LiDAR placement for more tasks under more weather conditions, the core formulation is similar to that in [34] as cited in the paper. - Only one type of 16-beam LiDAR sensor is investigated. Actually, sensors with more laser beams (64 or even 128) are becoming low-cost and popular. It is important to evaluate the effect of the placement as the point cloud data gets dense. - During the comparisons in Tabs. 2 and 3, different refined placements are applied for different tasks. How is the performance if a Seg-optimized placement is applied to the Det task? It is impractical to change the placement for different tasks. Is it possible to find an optimal placement for both tasks? Technical Quality: 3 Clarity: 3 Questions for Authors: - In the dataset comparison part, it is written that there is no adverse weather for nuScenes and Waymo. They may include rainy weather and even more (e.g., Snow). Although in the appendix part, please also have a check on that. - What is the solution space for the LiDAR placement? 
Are displacements along the z-axis or even rotations considered during the optimization? - Is there any intuitive interpretation about why refined sensor placements are beneficial to different tasks? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discussed the possible limitation in data collection. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer `8eRR`,** Thanks for devoting time to this review and providing valuable comments. --- >Q: Although the proposed method investigates LiDAR placement for more tasks under more weather, the core formulation is similar to that in [34] as cited in the paper. A: Thank you for the insightful comment. We acknowledge the reviewer's observation regarding the similarities between our core formulation and [34]. However, our method M-SOG addresses critical limitations present in S-MIG [34] and makes significant improvements as follows: - While S-MIG primarily focuses on object binomial occupancy, our M-SOG incorporates semantic differentiation to form a multinomial occupancy distribution, providing a richer and more detailed understanding of the scene. This allows our metric to better capture the true nature of the environment. - S-MIG constructs occupancy grids with 3D bounding boxes, overlooking the occlusion relationships between objects and the environment. Our M-SOG addresses this critical limitation by leveraging semantic distribution, which only incorporates the reachable surface of objects and environments for LiDAR rays. - We compare S-MIG and our M-SOG in terms of the linear consistency between metrics and perception performance (Lines 244 to 251) and illustrate our superiority in Fig 4. Greater linear consistency enables the surrogate metric to better predict perception performance. Thus M-SOG enhances applicability to downstream optimization tasks. > Q: Only one type of 16-beam LiDAR sensor is investigated. Sensors with more laser beams (64 or even 128) are becoming low-cost and popular. It is important to evaluate the effect of placement as the point cloud data gets dense. A: Thanks for your insightful comment. We appreciate the reviewer's comment on using denser-beam LiDARs. 
In our work, we specifically chose to utilize 16-beam LiDARs to perform experiments, totaling 64 beams with 4 LiDARs, based on the following goals: - Using low-resolution LiDARs increases the challenge in both object detection and semantic segmentation, making it easier to observe the impact of LiDAR placements on perception. - LiDAR point clouds become sparse at long distances and the use of multiple LiDARs is specifically to enhance the perception of sparse point clouds. To this end, we use low-resolution LiDARs to simulate the conditions of sparse point clouds from high-resolution LiDARs at long distances. This highlights the role of multiple LiDARs in improving the perception of sparse point clouds, especially for distant objects. We added experiments on 64-beam LiDARs and found that placement remains important with higher-resolution LiDARs. The results are as follows.

|Beam|Method|Center|Line|Pyramid|Trapezoid|
|-|-|:-:|:-:|:-:|:-:|
|16|BEVFusion-L|52.5|49.3|51.0|50.2|
|64|BEVFusion-L|77.1|80.4|79.3|77.5|

> Q: During the comparisons in Tabs 2 and 3, different refined placements are applied for different tasks. How is the performance if a Seg-optimized placement is applied to the Det task? It is impractical to change the placement for different tasks. Is it possible to find an optimal placement for both tasks? A: Thanks for asking. We show the performance of applying Seg-optimized placement to Det tasks and list the performance of Det-optimized layouts as a comparison. While the performance is close, the Seg-optimized placement performance still lags slightly behind the Det-optimized placement performance. To find optimal placement for both tasks, a possible solution is to define a weighted objective function for optimizing, such as $F(L_j) = M_{SOG|Seg}(L_j) + \lambda M_{SOG|Det}(L_j)$, where $\lambda$ is a weight that should be designed carefully. 
|Method|Center|Line|Pyramid|Square|Trapezoid|Line-roll|Pyramid-roll|Seg-optimized|Det-optimized|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|BEVFusion-L|52.5|49.3|51.0|49.2|50.2|50.8|50.7|51.8|53.0|
|CenterPoint|55.8|54.0|55.9|54.0|56.4|55.2|56.2|56.4|57.1|

>Q: In the dataset comparison part, it is written that there is no adverse weather for nuScenes and Waymo. They may include rainy weather and even more (e.g., Snow). Although in the appendix part, please also have a check on that. A: Sorry for the oversight and thanks for your comment. We will address this in the revision. While nuScenes and Waymo include adverse weather, our dataset provides categorized weather data under identical scenarios. This improvement allows for systematic study and fair evaluation of the degradation of perception models in adverse conditions, compared to the naturally distributed weather data in nuScenes and Waymo. >Q: How are the solution spaces for the LiDAR placement? Are the displacements along the Z-axis or even rotation considered during the optimization? A: Thanks for asking. In our work, the Z-axis and rotation are indeed considered in the optimization. The solution space for LiDAR placement in our optimization framework is defined as a 3D cuboid space above the vehicle roof. This space includes both the location coordinates (X, Y, Z) and roll angles for each LiDAR unit. Specific placements and a detailed explanation are discussed in Appendix B.1. >Q: Is there any intuitive interpretation of why refined sensor placement is beneficial to different tasks? A: Thanks for your question. There are several intuitive reasons why refined sensor placement is beneficial. - LiDAR heights: Increasing the average height of the 4 LiDARs improves performance, likely due to an expanded field of view. - Variation in heights: A greater difference in LiDAR heights enhances perception, as varied height distributions capture richer features from the sides of objects. 
- Uniform distribution: The pyramid placement performs better in segmentation, as a more spread-out and uniform distribution of 4 LiDARs captures richer surface features. As discussed, our optimized placements are anticipated to align with these findings in the experiments, leading to better performance. --- Rebuttal 2: Title: Looking Forward to Discussion with You Comment: **Dear Reviewer `8eRR`,** We sincerely appreciate your time and effort in reviewing our manuscript and providing valuable feedback. --- In response to your insightful comments, we have made the following revisions to our manuscript: - We have clarified the core formulation and highlighted the improvements over the work cited in [34], specifically addressing critical limitations in S-MIG. - We have expanded our experiments to include higher-resolution LiDARs, demonstrating that placement remains important even with denser point clouds. - We have provided intuitive explanations of how refined sensor placements enhance performance in various tasks, such as improving the field of view and capturing more detailed features. - We have verified the presence of adverse weather conditions in the Waymo and nuScenes datasets and detailed the key differences between our dataset and others in terms of adverse weather scenarios. --- We hope these revisions adequately address your concerns. We look forward to actively participating in the **Author-Reviewer Discussion** session and welcome any additional feedback you might have on the manuscript or the changes we have made. --- Once again, we sincerely thank you for your contributions to improving the quality of this work. Best regards, The Authors of Submission 1401
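The weighted two-task objective $F(L_j) = M_{SOG|Seg}(L_j) + \lambda M_{SOG|Det}(L_j)$ suggested in the rebuttal above can be illustrated with a minimal sketch. The metric functions and the 1D placement parameter below are toy stand-ins, not the authors' code or data:

```python
# Hypothetical sketch of a weighted two-task objective of the form
# F(L) = M_SOG_seg(L) + lambda * M_SOG_det(L), evaluated over a
# discretized solution space. All functions/values here are toy stand-ins.

def combined_objective(placement, m_sog_seg, m_sog_det, lam=1.0):
    """Weighted sum of the two per-task surrogate metrics."""
    return m_sog_seg(placement) + lam * m_sog_det(placement)

# Toy surrogates peaking at different placements for Seg (0.3) and Det (0.7).
m_seg = lambda x: -(x - 0.3) ** 2
m_det = lambda x: -(x - 0.7) ** 2

candidates = [i / 100 for i in range(101)]  # discretized 1D solution space
best = max(candidates, key=lambda x: combined_objective(x, m_seg, m_det, lam=1.0))
print(round(best, 2))  # → 0.5, midway between the two single-task optima
```

Varying `lam` shifts the optimum toward one task's preferred placement, which is why the rebuttal notes that the weight should be designed carefully.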
Rebuttal 1: Rebuttal: **Dear Reviewers, Area Chairs, and Program Chairs,** We sincerely thank the reviewers, ACs, and PCs for the time and efforts devoted during this review. We especially appreciate our reviewers for offering valuable comments, providing positive feedback, and drawing insightful suggestions. --- We are encouraged that our reviewers recognize this work: - **Reviewer `8eRR`:** - *"has interesting motivation and extensive experiments"*, *"proposes a new metric and a dataset"*, and *"the paper is well-written"*. - **Reviewer `DpmT`:** - *"is very interesting and timely"*, *"could be of interest to the self-driving community"*, and *"results are quite promising"*. - **Reviewer `PfQe`:** - *"focuses on a less-explored but rather important task"*, "*could open up new and interesting research directions in related research areas"*, *"contributes a full-cycle approach from LiDAR placement optimization to data generation and downstream evaluation"*, and "*has strong theoretical optimality certification"*. - **Reviewer `iXBb`:** - *"presents the first attempt at investigating the impact of multi-LiDAR placements for the 3D semantic scene"*, *"demonstrates effectiveness to represent the scene coverage"*, and "*provides a solid baseline for LiDAR placement optimization"*. --- As suggested by our reviewers, we have revised the manuscript accordingly. We present a **summary of changes** as follows: - **Methods & Technical Details:** - As suggested by **Reviewer `8eRR`**, we have clarified the core formulation and highlighted the improvements over the work cited in [34], specifically addressing critical limitations in S-MIG. - As suggested by **Reviewer `DpmT`**, we have improved the explanation of the optimization approach using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and added more details to the theoretical analysis. 
- As suggested by **Reviewer `PfQe`**, we have included additional experiments comparing the proposed method with randomly sampled LiDAR placements to justify the effectiveness of our method. - As suggested by **Reviewer `iXBb`**, we have clarified the task-independent nature of M-SOG and the optimization method and explained the variations in optimized placements for different tasks. - **Experiments:** - As suggested by **Reviewer `8eRR`**, we have expanded our experiments to include higher-resolution LiDARs, demonstrating that placement remains important even with denser point clouds. - As suggested by **Reviewer `DpmT`**, we have clarified our experiments for comparison between our M-SOG and the existing placement method S-MIG in Fig 4 and provided several findings and insights into LiDAR placement in our experiments. - As suggested by **Reviewer `PfQe`**, we have included details on the computational resources required for the optimization process and data generation. - As suggested by **Reviewer `iXBb`**, we have added experiments to explore the influence of the number of LiDARs on performance, identifying the most cost-effective number of LiDARs. - **Elaboration:** - As suggested by **Reviewer `8eRR`**, we have added intuitive interpretations of why refined sensor placements are beneficial to different tasks, such as increasing the field of view and capturing richer features. - As suggested by **Reviewer `DpmT`**, we have improved the clarity of our figures, such as zooming in on the relevant parts in Fig 2. To better explain Tabs 2 and 3, we have added a scatter plot of M-SOG and model performance, as shown in the Rebuttal PDF. - As suggested by **Reviewer `PfQe`**, we have discussed the feasibility of extending our benchmarks and optimization strategies to other 3D scene understanding tasks. - As suggested by **Reviewer `iXBb`**, we have included discussions on the local area coverage using M-SOG, adapting the metric to reflect specific regions. 
- **Other Minor Modifications:** - As suggested by **Reviewer `8eRR`**, we have checked and verified the presence of adverse weather conditions in Waymo and nuScenes, and included the major difference between our dataset and other datasets in the aspect of adverse weather in our revision. - As suggested by **Reviewer `DpmT`**, we have clarified notations and improved the overall writing to make the method more understandable. For typos, we corrected "voxelized grid" to "voxels". For notations, we properly introduced notations $T$ and $H$, included the $k$ index in $P_{SOG}$, and revised the $k$ index in optimization to avoid overlap. We also did a thorough check to ensure all notations were properly introduced and clearly explained. - As suggested by **Reviewer `PfQe`**, we have supplemented details on the influence of the density of the occupancy grid on the surrogate metric. - As suggested by **Reviewer `iXBb`**, we have added scatter plots of the M-SOG and model performance, which can be found in the attached single-page Rebuttal PDF file. For detailed responses regarding each of the above aspects, please kindly refer to the response **rebuttal windows** in the review section. --- Last but not least, we sincerely thank our reviewers, ACs, and PCs again for the valuable time and efforts devoted and the constructive suggestions provided during this review. --- Yours sincerely, The Authors of Submission 1401 Pdf: /pdf/417012bf39219840ad9f17577899d95e8f6a3c83.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Zero-Shot Reinforcement Learning from Low Quality Data
Accept (poster)
Summary: The work investigates methods for zero-shot reinforcement learning (RL) that can be trained on small, homogeneous datasets. This research is driven by the need to make zero-shot RL practical when large, heterogeneous datasets are unavailable. The authors identify the limitations of existing methods that overestimate out-of-distribution (OOD) state-action values when trained on low-quality datasets. They propose incorporating conservatism into zero-shot RL algorithms to mitigate these issues. Their experimental results demonstrate that conservative zero-shot RL methods outperform their non-conservative counterparts on low-quality datasets while maintaining competitive performance on high-quality datasets. Strengths: * Addressing Significant Issue: The paper attempts to address a significant gap in zero-shot RL by focusing on the challenges of using small, homogeneous datasets, which are more common in real-world applications. * Improved Performance: The proposed conservative zero-shot RL methods consistently outperform non-conservative counterparts on low-quality datasets and do not degrade performance on high-quality datasets. Weaknesses: 1. **The problem definition**: I do not think the setting of this work is so-called zero-shot RL, since the work introduces another dataset. And the code link is invalid. 2. **Complexity of Implementation**: The introduction of conservatism increases the complexity of the algorithms, which may pose implementation challenges for practitioners. 3. **Limited Real-World Validation**: While the experiments are thorough, there is limited validation of the proposed methods in real-world settings, which could affect their applicability. 4. **Comparative Analysis**: The paper could benefit from a more in-depth comparative analysis with other state-of-the-art methods to highlight specific strengths and weaknesses. Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses. 
Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See in weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Hi Reviewer ZxeR. See our responses to your comments below. We hope we can work with you to address any misunderstandings. Thank you in advance for your cooperation. **W1: The problem definition.** **I do not think the settings of this work is so-called Zero-shot RL since the work introduces another dataset.** We believe there is a critical misunderstanding in this question which we hope to clarify. We are very much performing zero-shot RL in this work. Zero-shot RL is about using a dataset of transitions collected from an environment to pre-train an agent such that it can return a performant policy for any downstream task in that environment. In our work, we are only changing the quality of this pre-training dataset. We are not introducing new datasets at test-time, and we are not fine-tuning agents on new data. We lay out our problem setting in lines 49-55, and this exactly matches the original zero-shot RL problem as proposed in the canonical work [1]. **W2: Complexity of Implementation. The introduction of conservatism increases the complexity of the algorithms, which may pose implementation challenges for practitioners.** We disagree that our proposals would pose implementation challenges for practitioners. In lines 147-148 we stated: *“We emphasise these additions represent only a small increase in the number of lines required to implement existing methods.”* We provide code snippets in Appendix G where the reviewer can see that our proposals constitute a few 10s of extra lines of code on top of the vanilla FB implementation. 
**W3: Limited Real-World Validation.** **While the experiments are thorough, there is limited validation of the proposed methods in real-world settings, which could affect their applicability.** Whilst we agree that evaluating methods on simulated benchmarks raises questions about the real-world applicability of any method, we specifically chose D4RL as one of our testbeds because it is designed to mirror the difficulties/practicalities of real-world deployment. The D4RL paper describes the benchmark as being *“guided by key properties of datasets relevant to real-world applications of offline RL. (cf. abstract)”* And that *“the choice of tasks and datasets [are] motivated by properties reflected in real-world applications, such as narrow data distributions and undirected, logged behavior. (cf. section 7)”*. Because of these characteristics, D4RL has become the standard benchmark for evaluating (as closely as is feasible) the real-world applicability of offline RL algorithms, and our work is a continuation of that tradition. **W4: Comparative Analysis.** **The paper could benefit from a more in-depth comparative analysis with other state-of-the-art methods to highlight specific strengths and weaknesses.** As discussed in section 4.2, we use both existing state-of-the-art zero-shot RL methods (successor features with Laplacian eigenfunctions and FB representations) as baselines. We also compare with the performance of TD3, the best performing single-task RL method on the ExORL benchmark, and CQL, the single-task equivalent of our method. To the best of our knowledge there are no other zero-shot RL methods for us to compare against. Our comparisons span 5 baseline algorithms, 11 datasets, 6 domains, 19 tasks, and 3/5 seeds for a total of 1086 individual runs. We believe Section 4 (Results) provides evidence of the strengths of our methods w.r.t. the existing state-of-the-art. 
We explore the weaknesses of our methods thoroughly with additional experiments in Section 5 (Limitations). **Code** Finally, we’re not sure what happened with our code link, it seems to have expired unexpectedly since we submitted this work. We’ve shared a new link with the AC (as per the guidelines) that they should share with you so you can review the code. **References** [1] Ahmed Touati, Jérémy Rapin, and Yann Ollivier. Does zero-shot reinforcement learning exist? ICLR 2023
Summary: This work addresses zero-shot reinforcement learning (RL), focusing on training agents to perform tasks without explicit rewards during pre-training. The authors investigate the performance degradation that occurs with small, homogeneous datasets and propose conservative zero-shot RL algorithms inspired by single-task offline RL approaches. Experimental results demonstrate that the proposed methods improve performance on sub-optimal datasets without compromising effectiveness on large, diverse datasets. Strengths: * Zero-shot pretraining for RL is a crucial problem that can enhance generalization in downstream tasks. * The proposed method is well-motivated, and the writing is clear. * The experiments are thorough and effectively validate the proposed method. Weaknesses: * One weakness of the paper is the lack of a clear definition for "low-quality data." As shown in Figure 8, it might be more accurately described as "coverage" rather than quality. Providing a more explicit definition is recommended. * Additionally, the comparison of baselines is incomplete, as it lacks an offline goal-conditioned baseline, which is highly relevant in the considered tasks. Including a representative offline GCRL baseline, such as GC-IQL, would strengthen the comparison. * Some related works are missing, particularly in the offline GCRL domain. References [1][2][3][4] are pertinent as they also focus on learning from offline datasets without rewards and deploying for multiple goals. The didactic example in this paper is very similar to Figure 1 in a prior paper [3], which also conveys similar information during zero-shot deployment for multiple goals. A discussion of the prior results would be beneficial. Furthermore, a very relevant offline zero-shot RL method [5] is neither cited nor discussed. *References* [1] Yang R, Lu Y, Li W, et al. Rethinking goal-conditioned supervised learning and its connection to offline rl. ICLR, 2022. [2] Park S, Ghosh D, Eysenbach B, et al. 
Hiql: Offline goal-conditioned rl with latent states as actions. Advances in Neural Information Processing Systems, 2024. [3] Yang R, Yong L, Ma X, et al. What is essential for unseen goal generalization of offline goal-conditioned rl?[C]//International Conference on Machine Learning. PMLR, 2023. [4] Eysenbach B, Zhang T, Levine S, et al. Contrastive learning as goal-conditioned reinforcement learning[J]. Advances in Neural Information Processing Systems, 2022. [5] Frans K, Park S, Abbeel P, et al. Unsupervised Zero-Shot Reinforcement Learning via Functional Reward Encodings[J]. arXiv preprint arXiv:2402.17135, 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: * Could the authors provide a more explicit definition of "low-quality data"? * It is recommended to compare the proposed method with a representative offline GCRL baseline. * Including related works discussed above would make this study more comprehensive. * Can this method also be applied to approaches based on successor features? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: One limitation is that VC-FB and MC-FB do not outperform CQL on the D4RL benchmark; addressing this issue is left for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Hi Reviewer ZC95. Thanks very much for engaging with our paper and for the positive comments. See our response to your questions below. **Q1: Could the authors provide a more explicit definition of "low-quality data"?** To the best of our knowledge, there isn’t one metric that formalises the quality and/or coverage of an offline RL dataset. We think of datasets as having two independent characteristics: 1) diversity—how well the state-action space is covered, and 2) size—the absolute number of samples in the dataset. It’s possible to have high diversity and small size (RND-100k) or low diversity and large size (Random-Full). We used “quality” as an umbrella term for both size and diversity, though we agree we could have used “coverage”. If the reviewer feels strongly about this we could replace “low quality” with “low coverage” in the paper. **Q2-3: It is recommended to compare the proposed method with a representative offline GCRL baseline, and include related GCRL literature.** Zero-shot RL requires methods to generalise to both goal-reaching reward functions *and* “dense” reward functions (i.e. those that define locomotion tasks like Walker-Walk, Walker-Run etc.). We didn’t initially compare our methods to an offline GCRL baseline because such methods are, in principle, only capable of generalising to goal-reaching tasks. Indeed, it is not immediately clear how we would even define a goal for locomotion-type tasks. GCRL methods are therefore limited in a way zero-shot RL methods are not, so we didn’t think they represented a valid baseline. That said, we're keen to provide empirical evidence to support this explanation, and have run additional experiments to that end. We evaluated GC-IQL on our 4 ExORL domains when trained on the RND-100k buffer. In lieu of a well-defined goal state for the locomotion tasks, we used the state in $D_{\text{labelled}}$ with highest reward. 
We hypothesised that GC-IQL would perform similarly to our proposed methods on the goal-reaching tasks, and less well on the locomotion tasks. We also hypothesised that, because GC-IQL employs conservatism like VC-FB and MC-FB, it would handle the low-quality datasets better than vanilla FB and outperform it in aggregate, despite the naive goal definition for locomotion tasks. We report the aggregated performance of GC-IQL averaged across 5 seeds, compared with zero-shot RL methods based on FB in the table below. | Domain | Task | Task Type | FB | GC-IQL | VC-FB | MC-FB | | --- | --- | --- | --- | --- | --- | --- | | Walker | All Tasks | Locomotion | 266 (233–283) | 218 (164 - 251) | **396 (381–407)** | 252 (188–288) | | Point-mass Maze | All Tasks | Goal-reaching | 102 (0–181) | **332 (190-436)** | 323 (177–412) | 270 (154–459) | | Quadruped | All Tasks | Locomotion | 93 (69–137) | 85 (61-129) | **168 (104–201)** | 104 (38–212) | | Jaco | All Tasks | Goal-reaching | 4 (1–6) | 15 (6-25) | 7 (3–12) | **17 (7–26)** | | All Domains | All Tasks | - | 97 | 170 | **245** | 178 | We find that GC-IQL performs similarly to the VC-FB and MC-FB on the goal-reaching tasks as predicted. On the locomotion tasks it performs worse than all zero-shot methods (irrespective of whether they are conservative or not). This is presumably because the state with highest reward in $D_{\text{labelled}}$ is a poor proxy for the true, dense reward function. Thank you for this feedback, it has made us realise that we should clarify the difference between GCRL and zero-shot RL in the paper. We will add these results (and associated results for DIAYN and Random datasets), an explanation of the differences between GCRL and zero-shot RL, and the associated literature you cite in the revised manuscript. **Q4: Can this method also be applied to approaches based on successor features?** Yes, our value-conservative proposals are fully compatible with successor features. 
We mention this in line 99, and provide a derivation of value conservative successor features in Appendix D. --- Rebuttal Comment 1.1: Comment: Thank you for the responses. While most issues have been resolved, I think the first question remains unaddressed. The authors claim that "quality" is composed of "diversity" and "data size," which I disagree with. Does this imply that data with low diversity but large size is considered high quality? Additionally, "coverage" or "diversity" has been studied in previous works such as [1][2], whereas "quality" often refers to the average return of the trajectories. I suggest the authors reconsider their definition or terminology to ensure the rigour of an otherwise good work. [1] Schweighofer K, Radler A, Dinu M C, et al. A Dataset Perspective on Offline Reinforcement Learning[J]. arXiv preprint arXiv:2111.04714, 2021. [2] Yarats D, Brandfonbrener D, Liu H, et al. Don't change the algorithm, change the data: Exploratory data for offline reinforcement learning[J]. arXiv preprint arXiv:2201.13425, 2022. --- Rebuttal 2: Title: Response by authors Comment: Thanks for pointing us toward this work. We currently cite [2], and use their proxy for dataset quality (downstream performance on certain tasks) to rank our datasets. However, we were unaware of [1]; the metrics they propose are helpful and, although not directly applicable to our setting as we’ll explain, we’re keen to use these ideas. In [1] they propose metrics for measuring the *exploitation (TQ)* and *exploration (SACo)* of the behaviour policy that creates the dataset. *TQ* is estimated as the mean reward of trajectories in the dataset w.r.t. a single downstream task of interest. Since we’re interested in generalising to any downstream task, in principle we’d need to calculate TQ for an infinite set of reward functions as specified by our task sampling distribution $\mathcal{Z}$, which is clearly intractable. 
As an alternative, we could calculate TQ w.r.t. our test reward functions only, but our datasets are made up of state-action-next-state transition samples, not of full trajectories, so we can’t do this either. Consequently, TQ doesn’t quite fit our problem setting. However, we can measure *SACo*: the ratio of unique state-action pairs contained in a dataset w.r.t. some reference dataset. We used their codebase to do so w.r.t. an idealised dataset of 100k unique state-action transitions. See the metrics below; note we had to vary the number of discretisation bins to get meaningful results across environments with different sizes of state and action spaces. | | Walker (5 bins) | Point-mass Maze (25 bins) | Quadruped (3 bins) | Jaco (4 bins) | | --- | --- | --- | --- | --- | | RND-100k | 0.993 | 0.557 | 0.657 | 0.981 | | DIAYN-100k | 0.432 | 0.395 | 0.587 | 0.983 | | Random-100k | 0.477 | 0.036 | 0.526 | 0.801 | Though this is a step in the right direction it is still not a complete picture. For that, we would need a TQ-like metric that is applicable to our setting. This has raised an interesting discussion which we’ll add to the paper to highlight the limitations of these existing approaches. We will add this table to Appendix B (Datasets), cite this work, update text in the main body to mention this method and to discuss its limitations.
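The SACo computation described in this rebuttal can be sketched in a few lines. Below is an illustrative numpy approximation using equal-width bins per dimension; the function name and binning scheme are assumptions for exposition, not the actual codebase of [1] that the authors used:

```python
import numpy as np

def saco(states, actions, ref_states, ref_actions, n_bins=5):
    """Approximate SACo: ratio of unique discretised state-action pairs in a
    dataset to those in a reference dataset (illustrative sketch only)."""
    def count_unique(s, a, lo, hi):
        sa = np.hstack([s, a])
        # Equal-width discretisation of each dimension into n_bins bins.
        binned = np.floor((sa - lo) / (hi - lo + 1e-8) * n_bins).astype(int)
        binned = np.clip(binned, 0, n_bins - 1)
        return len({tuple(row) for row in binned})

    # Shared bin edges so both datasets are discretised identically.
    all_sa = np.hstack([np.vstack([states, ref_states]),
                        np.vstack([actions, ref_actions])])
    lo, hi = all_sa.min(axis=0), all_sa.max(axis=0)
    return count_unique(states, actions, lo, hi) / count_unique(ref_states, ref_actions, lo, hi)
```

As in the table above, a dataset that revisits the same few state-action regions scores low, while one matching the reference's coverage scores near 1.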
Summary: This paper identifies an overestimation problem of zero-shot RL algorithms, particularly in low data or low data quality settings. To address this, they propose to use a value conservative method to mitigate the overestimation (from CQL). They showcase that this effectively allows for reducing the overestimation, then empirically show performance improvements on low or bad quality data regimes, without impacting performance on usual regimes. Strengths: I am not an expert of zero short RL, but the paper seems sound and is mostly clearly written. The problem seems relevant as zero short RL from low quality data is relevant for robotic applications, as underlined by the authors. I like the first figure, that provides intuition, similar to the ones in DDQN's paper. The method is clearly motivated and explained, the experimental evaluation is clear, the figure and caption allow assessing the defined precise scientific questions. I do not see much flaw in this paper presentation and method, but my qualification is not too high. Weaknesses: **Provide a bit more background and intuition about zero-shot RL**. This is not really mandatory, but it would help a broader audience. **Introduce the RANDOM and RND datasets**. Maybe an introduction of these datasets can help better grasp Figure 2 and 3, and would help the previous point. Technical Quality: 3 Clarity: 4 Questions for Authors: * What is the intuition behind the overestimation reduction of CQL ? Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: There is a dedicated limitation section that correctly addresses the limitations of the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Hi Reviewer 5D6P. Thanks very much for engaging with our paper and for the positive comments. See our response to your questions below. **Q1: Provide a bit more background and intuition about zero-shot RL**. Thanks for the pointer. We’ll add the following text to help intuition after the formal introduction of zero-shot RL in lines 49-55. *“Intuitively, the zero-shot RL problem asks: is it possible to train an agent using a pre-collected dataset of transitions from an environment such that, at test time, it can return the optimal policy for *any* *task* in that environment without any further planning or learning?”* **Q2:** **Introduce the RANDOM and RND datasets (to help Figure 2)**. We introduce the datasets in lines 177-186 in Section 4.1 (after Figure 2), and provide more detail in Appendix A.2, including a visualisation of the state coverage on Point-mass Maze in Figure 8. We point to Appendix A.2 at the end of Figure 2’s caption in light of not being able to introduce the datasets earlier. We feel that, together, these provide good context on the datasets used in our study. **Q3: What is the intuition behind the overestimation reduction of CQL?** We provide (brief) intuition of CQL’s regularising effect in Lines 120-121, VC-FB’s regularising effect in Lines 124-125, and MC-FB’s regularising effect in Lines 131-134. We’ll summarise these again below for convenience. In offline RL, $Q$ functions tend to over-value state-action pairs that aren’t in the dataset. Intuitively, CQL amends the $Q$ function’s loss such that the values of these state-action pairs are downweighted, whilst the values of state-action pairs that are in the dataset are upweighted (Equation 10). VC-FB does the same thing, but for all possible downstream tasks (Equation 11), rather than just one task as with CQL. MC-FB is slightly different in that the regulariser operates on expected future visitation counts (measures $M$), rather than expected values ($Q$ functions). 
The MC-FB loss function downweights the *probability* of reaching some future state $s_+$ from state-action pairs not in the dataset, whilst upweighting the probability of reaching some future state $s_+$ from state-action pairs that are in the dataset. --- Rebuttal Comment 1.1: Title: Thank you very much for your clarifications Comment: Thank you very much for your clarifications. I hope that they will help improve your manuscript, and make it more accessible to a broader audience.
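The CQL-style regulariser summarised in this thread can be illustrated with a toy numpy sketch of its shape: a soft maximum over sampled (possibly OOD) action values is pushed down, while dataset action values are pushed up. This is only an illustration of the idea, not the paper's Equation 10/11, and the function name is hypothetical:

```python
import numpy as np

def conservative_penalty(q_ood, q_data):
    """Toy CQL-style penalty.

    q_ood:  (batch, n_sampled) Q-values of sampled, possibly OOD, actions
    q_data: (batch,)           Q-values of actions taken in the dataset

    Minimising this term pushes down the soft maximum over sampled actions
    while pushing up the value of in-dataset actions."""
    # Numerically stable logsumexp over the sampled-action axis.
    m = q_ood.max(axis=1, keepdims=True)
    soft_max = m.squeeze(1) + np.log(np.exp(q_ood - m).sum(axis=1))
    return float((soft_max - q_data).mean())
```

When OOD actions are overestimated relative to dataset actions, the penalty grows, which is the overestimation-reduction effect described above.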
Summary: This work proposes modifications to Forward-Backward representations method. The modifications are aimed to improve the method's robustness to dataset quality. The vanilla FB suffers when the dataset does not cover the state space well, and overestimates the quality of actions. The authors show this with a simple example on point-mass. Two modifications are proposed, VC-FB and MC-FB, which take inspiration from CQL and make sure the FB representations are not too optimistic on the transitions not seen in the dataset. VC-FB and MC-FB are then thoroughly tested on ExORL and D4RL. The authors show that VC-FB offers the best improvement on ExORL, although none of the proposed modifications beat CQL on D4RL benchmark. Nevertheless, in all settings the modifications greatly improve over vanilla FB. Strengths: - The paper is clearly written. - The proposed modifications have clear mathematical motivation, similar to CQL - Both the modifications improve over the vanilla FB representation, and even outperform CQL on ExORL - The experiments are thorough and clear Weaknesses: - The novelty is limited: the authors take CQL's method for correcting overly optimal predictions and apply them to FB. Technical Quality: 3 Clarity: 3 Questions for Authors: - How do the proposed modifications affect the performance of the model when the dataset is of good quality? Does it hurt the performance? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately described limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Hi Reviewer qeSX. Thanks very much for engaging with our paper and for the positive comments. See our response to your question below. **Q1: How do the proposed modifications affect the performance of the model when the dataset is of good quality? Does it hurt the performance?** We asked ourselves this question in Q3, Section 4, and responded to it on Lines 221-238, and with Figure 6, Table 1 and Table 8 (Appendix C). We’ll summarise our findings here for convenience. We compared the performance of our methods with vanilla FB on the full RND, DIAYN and Random datasets and found our methods showed superior performance on all of them (Table 1). We then ran a second experiment where we trained our methods and FB on different sizes (100k, 500k, 1m, 10m) of RND (the highest quality dataset). Again, we found our method outperformed vanilla FB on each of these, with the discrepancy in performance decreasing as we approached the fully sized dataset (Figure 6). As a result, we’re confident that conservative zero-shot RL methods do not trade better performance on low quality datasets for worse performance on the full, high-quality datasets. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My question regarding "good quality" dataset was rather about what would happen if the dataset you're pre-training on actually contains trajectories that solve the downstream task. In section 4.1 you say that you use offline TD3 performance on a given dataset as a proxy for dataset quality. My question is basically what will happen if you push the quality (as measured by TD3) even higher than you have so far. For example, you could take online TD3 replay buffer. Since there's not much time for you to collect the data and train on it, I'm just asking for you intuition about this. --- Reply to Comment 1.1.1: Title: Response from authors Comment: Thanks for clarifying your question. 
We hint at the setting you describe when we train on the medium-expert dataset in our D4RL experiments (a dataset containing a mix of ~optimal and suboptimal trajectories for the downstream task). We report those results in Table 10 Appendix C. We found VC-FB and MC-FB were performant in this setting, unlike FB which fails catastrophically, but didn’t match the performance of CQL. Our intuition is that the success of our methods in this setting depends closely on the choice of $\tau$ which reflects how “conservative” our methods are w.r.t. OOD state-actions. If $\tau$ is too high, the methods are not conservative w.r.t. OOD state-actions and we get catastrophic failures akin to FB. If $\tau$ is too low the methods are too conservative about OOD state-actions and the policy struggles if it deviates from the ~optimal trajectories described by the dataset. Similar findings are discussed in the original CQL paper in Appendix F. Of course, this is far from perfect, and we’d like to look at more robust methods in future work.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DataStealing: Steal Data from Diffusion Models in Federated Learning with Multiple Trojans
Accept (poster)
Summary: This paper identifies a pioneering privacy threat in Federated Learning (FL) introduced by popular diffusion models, called DataStealing. The researchers show that an attacker can use Combinatorial Triggers (ComboTs) to steal private data from a client in vanilla FL, even with strict data management protocols in place. For advanced FL systems, they propose an attack method (AdaSCP) to bypass distance-based defenses by adaptively scaling critical parameter updates with indicators. Extensive experiments demonstrate that AdaSCP not only circumvents existing defenses but also successfully steals thousands of images from the system. The paper calls for increased attention to privacy issues in diffusion models within the FL community. Strengths: 1. They propose a new benchmark (DataStealing) to demonstrate the security vulnerability of training federated diffusion models, which can be utilized to circumvent the strict data protection measures of some organizations. They show that multiple Trojans (ComboTs) are effective in leaking thousands of data samples. 2. The proposed AdaSCP is the first to update the critical parameters and utilize indicators to optimize the scale value according to the theoretical target value. The authors provide a simple proof for calculating the optimal scaling value with the returned indicator updates. 3. The paper offers valuable insights into the critical privacy and security concerns associated with federated diffusion models. It raises a crucial security issue and an attack method for the FL community. 4. Well-written paper with clear motivation, technical approach, and clear figures. Extensive experiments support their claim well. Weaknesses: 1. The reason that Model Poisoning cannot bypass the defenses is only demonstrated in Section 3.3. More quantitative analysis, including details of defending against malicious updates with different attack methods, can enhance understanding. A figure is also acceptable. 2. 
The error bars are not reported. Although it will require significant computational resources, I recommend conducting repeat experiments with different seeds, at least for the proposed AdaSCP. This will enhance the academic quality of this paper. 3. The recent studies [1, 2, 3] should be discussed in the related work. 4. Some minor typos: - “..”->”.” in line 200. - “front” should be “first” to keep consistent in Algorithm 2 line 6. [1] Jia J, Yuan Z, Sahabandu D, et al. FedGame: a game-theoretic defense against backdoor attacks in federated learning[J]. Advances in Neural Information Processing Systems, 2024, 36. [2] Cheng S, Tao G, Liu Y, et al. Lotus: Evasive and resilient backdoor attacks through sub-partitioning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 24798-24809. [3] Huang J, Hong C, Chen L Y, et al. Gradient Inversion of Federated Diffusion Models[J]. arXiv preprint arXiv:2405.20380, 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weakness part. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for appreciating our contributions. We address your concerns below: **Response to Q1:** We appreciate your suggestion to include more quantitative analysis on defending against malicious updates. To enhance understanding, we analyze the distance of Krum and Multi-Krum across different training rounds. Compared to other defense algorithms, the variations in distance are more distinct. The results are presented below: Table C: Ratio of Malicious Update Distance to Mean Benign Model Distance (AdaSCP in CIFAR10). | Defenses \ Round | 1 | 10 | 20 | 50 | 100 | 200 | 300 | |:----------|:-------|:-------|:-------|:-------|:-------|:-------|:-------| | Krum | 6.6556 | 1.4777 | 0.9072 | 0.9715 | 0.9378 | 0.9435 | 0.9514 | | Multi-Krum | 6.6557 | 1.2245 | 0.9834 | 0.9829 | 0.9578 | 0.9503 | 0.9507 | Table C shows that the initial scale value is substantially distant from the optimal value, resulting in the initial distance of the malicious updates being approximately 6.7 times greater than that of the benign updates. As AdaSCP progressively optimizes the scale value, the distance of the malicious updates more closely aligns with that of the benign updates, gradually approaching the optimal distance that enables the attack to bypass the defenses. More details can be found in Figure B in the PDF file. In the revised version, we will include Figure B to enhance the readers' understanding of our method. **Response to Q2:** Thank you for your suggestion. We conducted repeat experiments with two additional seeds, resulting in three distinct Non-IID data distributions. We present our results as mean ± standard deviation. The results are listed below: Table D: Repeating Experiment of AdaSCP in CIFAR10 with Non-IID Distribution. 
| Defenses | FID | MSE | |:----------|:-------|:-------| | FedAvg | 10.41±1.79|0.0117±0.0013 | | Krum | 21.53±6.58 |0.0683±0.0184| | Multi-Krum | 8.23±0.32 | 0.1267±0.0008 | | Foolsgold | 16.57±5.54 | 0.0246±0.0211 | | RFA | 8.71±0.42 | 0.1165±0.0144 | | Multi-metrics | 11.45±2.55 | 0.0299±0.0040 | | Mean | 12.82±2.64 | 0.0629±0.0036 | Table E: Repeating Experiment of AdaSCP in CelebA with Non-IID Distribution. | Defenses | FID | MSE | |:----------|:-------|:-------| | FedAvg | 6.94±0.23 | 0.0080±0.0012 | | Krum | 11.90±1.35 | 0.0574±0.0277 | | Multi-Krum | 4.74±0.22 | 0.1411±0.0133 | | Foolsgold | 8.55±0.91 | 0.0166±0.0099 | | RFA | 7.46±0.89 | 0.1256±0.0212 | | Multi-metrics | 7.47±0.18 | 0.0174±0.0060 | | Mean | 7.84±0.25 | 0.0610±0.0107 | The Mean values in Tables D and E show minimal deviation from those reported in our paper, demonstrating the robustness of our method to different data distributions. We will include the error bars in the revised manuscript. **Response to Q3:** Thank you for your suggestion. We have included more related work in Appendix E. These studies will also be discussed in our revised version. **Response to Q4:** Thank you for your careful review. We will address all typos in the revised manuscript.
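The distance ratios in Table C above suggest a simple mental model: the malicious update starts roughly 6.7x larger than the benign updates and is driven toward parity over rounds. As a toy illustration only (AdaSCP actually optimises the scale through returned indicator updates, not through direct access to benign update norms), one could rescale an update so its L2 norm matches the mean benign norm; the function below is hypothetical:

```python
import numpy as np

def rescale_to_benign(malicious_update, benign_updates):
    """Hypothetical sketch: scale a malicious update so its L2 norm matches
    the mean benign update norm, driving the distance ratio toward ~1.0 as
    in the later rounds of Table C."""
    target = float(np.mean([np.linalg.norm(u) for u in benign_updates]))
    norm = np.linalg.norm(malicious_update)
    return malicious_update * (target / (norm + 1e-12))
```

A distance-based defense such as Krum that filters on update norms would then see the rescaled update as indistinguishable (by norm) from the benign population, which is the behaviour the table quantifies.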
Summary: The paper introduces a novel attack method named DataStealing, which exploits vulnerabilities in diffusion models trained in Federated Learning (FL) systems. The authors highlight that diffusion models, despite their advanced capabilities in data generation, present new privacy threats when integrated with FL. They propose a method called Combinatorial Triggers (ComboTs) to steal images by embedding multiple backdoors in the diffusion models. To counteract distance-based FL defenses, they introduce an Adaptive Scale Critical Parameters (AdaSCP) attack that evaluates the importance of parameters and adaptively scales malicious updates to evade detection. Extensive experiments validate the effectiveness of AdaSCP in stealing private data from diffusion models. Strengths: Strengths 1. Novelty and Relevance: The paper proposes a new task, which performs data stealing from diffusion models trained in FL. Due to the popularity of diffusion models and FL systems, it points out an emerging threat in this critical area. 2. Clarity: The paper is well-organized and easy to follow. It considers scenarios where defenses are available and proposes an adaptive method to evade the defenses. 3. Experiments: The authors conduct experiments on two image datasets, comparing their methods against various state-of-the-art attacks and defenses, providing an empirical foundation for their claims. Weaknesses: Weaknesses: 1. Defense Mechanisms: A discussion on potential defense mechanisms or mitigations would make the work have better impacts in this area and further highlight the significance of the threat. 2. Assumptions: In the experiments, the proportion of malicious clients is 1/5, which might not be realistic in real practice. 3. More Experiments: The attack is evaluated on two datasets (CIFAR10 and CelebA). Expanding the evaluation to include more diverse datasets could further support the work. 
Technical Quality: 3 Clarity: 3 Questions for Authors: NA Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discussed the limitations and potential negative societal impact of the work in Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and high appreciation of the innovation and experiments in our work. We address your concerns below: **Response to W1:** Thank you for your suggestion. Here is our discussion on the defense mechanisms: - According to Table 1 and Fig.3 in the main paper, Multi-Krum shows the best defense performance, followed by RFA. Under the Multi-Krum defense, no attack strategy can rapidly compromise the global model to achieve MSE below 0.1 within the designated round. This is mainly because Multi-Krum is effective at detecting malicious updates. When the attacker uses a small scale to bypass Multi-Krum, the defense mechanism can still dilute the malicious update by weighted averaging the remaining updates. Although the experiment in Appendix A.1 shows that the defense is not reliable with longer training, Multi-Krum could be a good start for future work. - Additionally, the differential privacy algorithm can render indicators invalid while sacrificing generative performance, as discussed in Appendix F (Lines 666-671). This is mainly because noise or norm clipping treats all parameters uniformly. Since AdaSCP requires specific indicators with a large magnification factor, locating candidate indicators and filtering outliers after comparison with other updates would be more efficient and effective. This method is preferable to comparing the distance of all parameters, which tends to average out outliers and makes it difficult to filter out malicious clients. - Moreover, the experiment in Appendix A.2 shows that our backdoors diminish after 100 rounds of continued training with clean data, suggesting a potential mitigation strategy for future work. Lastly, since the triggers are implanted in the input noise, releasing only the generated results without exposing the model parameters is another viable defense. 
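For readers unfamiliar with the defense discussed above: Multi-Krum (Blanchard et al., 2017) scores each client update by its summed distance to its closest neighbors and averages only the lowest-scoring updates. A minimal NumPy sketch of this aggregation rule (the function name and toy dimensions are illustrative, not from the paper):

```python
import numpy as np

def multi_krum(updates, f, m):
    """Toy Multi-Krum aggregation.

    updates: (n, d) array of flattened client updates
    f: number of Byzantine clients tolerated
    m: number of lowest-scoring updates to keep and average
    """
    n = len(updates)
    # Pairwise squared Euclidean distances between all updates.
    d2 = ((updates[:, None, :] - updates[None, :, :]) ** 2).sum(axis=-1)
    scores = []
    for i in range(n):
        others = np.delete(d2[i], i)
        # Score = sum of distances to the n - f - 2 closest other updates.
        scores.append(np.sort(others)[: n - f - 2].sum())
    keep = np.argsort(scores)[:m]
    return keep, updates[keep].mean(axis=0)
```

In this toy setting, a heavily scaled malicious update sits far from the benign cluster, receives a high score, and is excluded; an attacker that shrinks its scale can survive selection, but its contribution is then diluted by the weighted averaging, which matches the behavior described in the rebuttal.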
We will add a new section to discuss the potential defense mechanisms and mitigations of *DataStealing* backdoors. **Response to W2:** We have conducted experiments with more clients, such as 1/8 and 1/10, as detailed in Appendix A.6. These results show that AdaSCP remains effective with more clients. For experiments with more attackers, we tested with two malicious clients, each having 500 target images. The results for 2/5 are shown below:

Table B: *DataStealing* with two attackers applying AdaSCP on CIFAR10 under Non-IID distribution.

| Defenses | FID | $\text{MSE}_1$ | $\text{MSE}_2$ | $\text{MSE}_{mean}$ |
|:----------|:-------|:-------|:-------|:-------|
| Multi-Krum | 14.97 | 0.0886 | 0.0898 | 0.0892 |
| Foolsgold | 33.24 | 0.0174 | 0.0229 | 0.0201 |
| Multi-metrics | 15.61 | 0.0324 | 0.0608 | 0.0466 |

Table B shows that AdaSCP can be applied in scenarios with multiple attackers. However, the attackers may interfere with each other, leading to varying attack performance. Additionally, having two attackers causes the FID to rise, mainly because they disrupt normal training with scaled updates: the magnitude of the weighted-average update is double that of a single attacker. This issue could be mitigated by dividing the updates by the number of attackers. We leave this for future research. **Response to W3:** Thank you for your suggestion. We conducted an additional experiment with the LSUN bedroom dataset, which has a higher resolution of 256x256. More details can be found in our global response and the PDF file.
Summary: The paper titled "DataStealing: Steal Data from Diffusion Models in Federated Learning with Multiple Trojans" investigates a novel privacy vulnerability in federated learning (FL) when training diffusion models. The authors introduce a new attack methodology named DataStealing, which leverages multiple Trojans to exfiltrate private data from local clients. The attack utilizes Combinatorial Triggers (ComboTs) to map extensive data and proposes the Adaptive Scale Critical Parameters (AdaSCP) attack to bypass advanced FL defenses. **Key Contributions:** * 1\. **Identification of Vulnerability**: - The paper identifies a new privacy threat in FL where diffusion models can leak substantial private data. Despite stringent privacy measures, attackers can exploit these models to steal high-quality local data. * 2\. **Combinatorial Triggers (ComboTs)**: - A method to select multiple triggers for backdoor attacks, significantly increasing the capability to map and steal large amounts of private data. * 3\. **Adaptive Scale Critical Parameters (AdaSCP)**: - An attack strategy that evaluates and scales critical parameter updates, making the malicious updates indistinguishable from benign ones. This circumvents distance-based defenses effectively. * 4\. **Experimental Validation**: - Extensive experiments demonstrate the ability of the proposed methods to leak thousands of images from training diffusion models in an FL setup. AdaSCP is shown to be highly effective against advanced FL defenses. **Conclusion**: The paper highlights the severe privacy risks associated with training diffusion models in FL and calls for more robust defensive measures to protect against such vulnerabilities. The findings emphasize the need for continuous advancements in FL security to safeguard private data. 
Strengths: **Strengths of the Paper** **Originality** - **New Vulnerability Identification**: The paper identifies a previously unrecognized vulnerability in federated learning (FL) when using diffusion models, termed as DataStealing. This identification highlights a novel attack surface in the intersection of generative models and FL. - **Innovative Attack Methodologies**: The introduction of Combinatorial Triggers (ComboTs) and the Adaptive Scale Critical Parameters (AdaSCP) attack represents significant originality. ComboTs utilize multiple triggers to enhance the attack’s capability, while AdaSCP circumvents advanced defenses by adaptively scaling critical parameter updates. - **Creative Combination of Techniques**: The paper creatively combines the concepts of Trojans in generative models with adaptive scaling techniques to formulate an effective attack strategy against FL systems. **Quality** - **Rigorous Experimental Validation**: The paper presents extensive experiments to validate the effectiveness of the proposed attack methodologies. The results are comprehensive, demonstrating the ability of AdaSCP to defeat various advanced FL defenses. - **Detailed Methodological Explanation**: The methodologies are described in detail, with clear explanations of the underlying principles and the implementation of the attack strategies. This thorough presentation ensures that the proposed methods are reproducible and verifiable. - **Robustness of Results**: The experimental results are robust, showing consistent performance across different datasets (CIFAR10 and CelebA) and FL settings (IID and non-IID distributions). **Clarity** - **Well-Organized Structure**: The paper is well-organized, guiding the reader logically from the identification of the problem to the proposed solutions and experimental validation. - **Clear Writing Style**: The writing is clear and concise, making complex technical content accessible. 
Each section is well-articulated, with smooth transitions that maintain the reader’s engagement. - **Effective Use of Visual Aids**: Figures and tables are used effectively to illustrate key points and results. These visual aids enhance the reader’s understanding of the methodologies and the significance of the findings. **Significance** - **High Relevance to Privacy and Security**: The paper addresses a critical issue in the domain of FL, highlighting significant privacy risks associated with training diffusion models. The identified vulnerabilities and proposed solutions are highly relevant to the ongoing discussions about data privacy and security in machine learning. - **Impact on Future Research**: The findings of this paper are likely to stimulate further research in the field, prompting the development of more robust defense mechanisms in FL. By uncovering new attack strategies, the paper sets the stage for advancements in both attack and defense techniques. - **Broad Applicability**: The proposed methodologies and findings are not limited to a specific application but are broadly applicable to various domains where FL and generative models are used. This broad applicability enhances the overall significance of the paper. **Conclusion** The paper "DataStealing: Steal Data from Diffusion Models in Federated Learning with Multiple Trojans" excels in originality, quality, clarity, and significance. It introduces novel vulnerabilities and attack methodologies in FL, backed by rigorous experimental validation and clear presentation. The contributions of this paper are highly relevant to the research community and hold significant potential for future advancements in privacy and security in machine learning. 
Weaknesses: **Weaknesses of the Paper** **Limited Discussion on Defense Mechanisms** - **Lack of Proactive Defense Strategies**: While the paper thoroughly explores the vulnerabilities and proposes robust attack methodologies, it falls short in discussing proactive defense mechanisms. It would significantly benefit from suggesting or exploring potential defense strategies to mitigate the identified vulnerabilities. Including a section on how to strengthen FL systems against such attacks would provide a more balanced view. **Depth of Contextualization** - **Comparative Analysis with Prior Work**: Although the paper positions its contributions within the broader context of federated learning and privacy security, a deeper comparative analysis with specific prior studies would enhance its impact. For example, more detailed comparisons with existing backdoor and Trojan attack methodologies would help in highlighting the advancements made by this work. - **Exploration of Related Work**: The related work section could be expanded to include a more comprehensive review of similar vulnerabilities in federated learning and generative models. This would help in better contextualizing the novelty and significance of the proposed methods. **Experimental Scope** - **Scalability and Generalizability**: While the experiments are robust, they are limited to specific datasets (CIFAR10 and CelebA) and certain configurations (IID and non-IID distributions). Exploring the scalability and generalizability of the proposed attack methodologies to other datasets and real-world scenarios would strengthen the empirical validation. For instance, including more diverse datasets or different FL frameworks could provide insights into the broader applicability of the findings. - **Extended Analysis of Parameters**: The paper briefly touches upon the impact of various parameters, such as the proportion of critical parameters and patch sizes in ComboTs. 
A more detailed and systematic analysis of how these parameters influence the attack's effectiveness across different settings would be beneficial. Including ablation studies on a wider range of parameters and configurations would offer a deeper understanding of the methods' robustness. **Presentation Improvements** - **Simplifying Technical Explanations**: Some sections of the paper, particularly those explaining the technical details of the methodologies, could be simplified for better readability. This would make the paper more accessible to a broader audience, including those who may not have a deep technical background in the specific area. - **Enhanced Flow and Readability**: Improving the flow and readability of certain sections, particularly the methodology and experiment results, would enhance the overall presentation. Ensuring smoother transitions between sections and sub-sections would maintain the reader’s engagement and comprehension. **Specific Suggestions for Improvement** * 1\. **Incorporate Proactive Defense Mechanisms**: Add a section discussing potential defense strategies to mitigate the vulnerabilities identified. This could include theoretical approaches, proposed defense mechanisms, or an exploration of existing techniques that could be adapted. * 2\. **Expand Comparative Analysis**: Deepen the comparative analysis with prior work, providing detailed discussions on how the proposed methods advance the state-of-the-art in backdoor and Trojan attacks in FL. * 3\. **Broaden Experimental Validation**: Expand the experimental scope to include more diverse datasets and configurations. Conduct a systematic analysis of various parameters influencing the attack's effectiveness, with detailed ablation studies. * 4\. **Simplify and Enhance Readability**: Simplify technical explanations where possible, and improve the overall flow and readability of the paper. 
This could involve rephrasing complex sections and ensuring smooth transitions between different parts of the paper. By addressing these weaknesses, the paper can further solidify its contributions and provide a more comprehensive and impactful addition to the research area. Technical Quality: 3 Clarity: 3 Questions for Authors: **Questions and Suggestions for the Authors** **Questions** * 1\. **Defense Mechanisms**: - What proactive defense strategies do you suggest to mitigate the vulnerabilities identified in the paper? Have you considered evaluating existing defense mechanisms or proposing new ones specifically tailored to counter the DataStealing attack? * 2\. **Comparative Analysis**: - Can you provide a more detailed comparative analysis with specific prior studies on backdoor and Trojan attacks in federated learning? How do your proposed methods significantly advance the state-of-the-art compared to these existing approaches? * 3\. **Scalability and Generalizability**: - How do you anticipate your proposed attack methodologies (ComboTs and AdaSCP) will perform on more diverse datasets and in real-world scenarios? Have you considered evaluating your methods on other datasets or federated learning frameworks to validate their scalability and generalizability? * 4\. **Parameter Sensitivity**: - Can you provide a more systematic analysis of how different parameters (e.g., proportion of critical parameters, patch sizes in ComboTs) influence the effectiveness of the attacks? Detailed ablation studies on these parameters would be beneficial for understanding the robustness of your methods. * 5\. **Complexity and Computation**: - What are the computational costs and complexities associated with implementing your proposed methods (ComboTs and AdaSCP)? How feasible is it to execute these attacks in practical settings, considering the required computational resources and time? **Suggestions** * 1\. 
**Proactive Defense Strategies**: - Include a dedicated section discussing potential defense strategies to mitigate the vulnerabilities identified. This could involve theoretical approaches, proposed defense mechanisms, or an exploration of existing techniques that could be adapted to counter DataStealing. * 2\. **Enhanced Comparative Analysis**: - Deepen the comparative analysis with prior work, providing detailed discussions on how your proposed methods advance the state-of-the-art in backdoor and Trojan attacks in federated learning. This can help in better contextualizing the novelty and significance of your contributions. * 3\. **Broaden Experimental Validation**: - Expand the experimental scope to include more diverse datasets and configurations. Conduct a systematic analysis of various parameters influencing the attack's effectiveness, with detailed ablation studies. This would strengthen the empirical validation and demonstrate the broader applicability of your findings. * 4\. **Simplify Technical Explanations**: - Simplify the technical explanations where possible to make the paper more accessible to a broader audience, including those without a deep technical background in the specific area. This would enhance the overall readability and impact of the paper. * 5\. **Detailed Discussion on Limitations**: - Include a more detailed discussion on the limitations of your proposed methods. Address potential challenges and constraints in implementing the attacks, and suggest future research directions to overcome these limitations. By addressing these questions and suggestions, the authors can provide a more comprehensive and impactful contribution to the research area, enhancing the clarity, robustness, and significance of their work. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: **Assessment of Limitations and Potential Negative Societal Impact** **Addressed Limitations** * 1\. 
**Technical Limitations**: - The authors briefly mention the limitations related to the computational complexity and scalability of their proposed methods. They acknowledge the challenges in training diffusion models with ComboTs and the sensitivity of these models to gradient updates. * 2\. **Experimental Constraints**: - The paper indicates that the experiments were conducted on specific datasets (CIFAR10 and CelebA) and acknowledges the need for further validation on more diverse datasets and real-world scenarios. * 3\. **Defensive Measures**: - There is a mention of the need for further research to develop robust defensive mechanisms against the identified vulnerabilities in federated learning, suggesting an awareness of the current gap in defensive strategies. **Potential Negative Societal Impact** * 1\. **Privacy and Security Risks**: - The paper highlights significant privacy risks associated with federated learning, especially when training diffusion models. The authors acknowledge the potential misuse of the proposed attack methodologies to steal private data, underscoring the importance of addressing these vulnerabilities to protect user privacy. **Constructive Suggestions for Improvement** * 1\. **Detailed Discussion on Limitations**: - The authors should provide a more comprehensive discussion on the limitations of their proposed methods. This could include: - A deeper exploration of the computational costs and complexities associated with implementing ComboTs and AdaSCP. - An assessment of the scalability and generalizability of the proposed methods across different datasets and federated learning frameworks. - Potential challenges in deploying these attacks in real-world scenarios, considering practical constraints and resource limitations. * 2\. **Proactive Defense Strategies**: - Include suggestions or preliminary evaluations of potential defense mechanisms to counter the DataStealing attack. 
This would provide a more balanced view and contribute to advancing the field by not only identifying vulnerabilities but also proposing ways to mitigate them. * 3\. **Ethical Considerations and Responsible Disclosure**: - The authors should discuss the ethical considerations and responsible disclosure practices followed in conducting this research. This could include: - Steps taken to ensure that the research does not cause harm. - Collaboration with organizations or communities to address the identified vulnerabilities responsibly. - Suggestions for policy or regulatory measures to protect against the misuse of such attack methodologies. * 4\. **Broader Societal Impact**: - Expand on the broader societal impact of the research by discussing: - How the findings could influence the development of privacy-preserving technologies in federated learning. - The implications for industries and applications relying on federated learning, such as healthcare, finance, and IoT. - Potential benefits of raising awareness about these vulnerabilities to foster more robust and secure machine learning practices. **Conclusion** The authors have made a good start in addressing the limitations and potential negative societal impacts of their work. By providing a more detailed discussion on these aspects and incorporating the constructive suggestions above, the authors can enhance the comprehensiveness and responsibility of their research. This approach would contribute positively to the field and ensure that the research is conducted and presented in an ethically sound and impactful manner. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and appreciation of the novelty and experiments of our work. We address your concerns below: **Response to "Discussion on Defense Mechanisms"**: Thank you for your suggestion. Here is our discussion on the defense mechanisms: - According to Table 1 and Fig.3 in the main paper, Multi-Krum shows the best defense performance, followed by RFA. Under the Multi-Krum defense, no attack strategy can rapidly compromise the global model to achieve MSE below 0.1 within the designated round. This is mainly because Multi-Krum is effective at detecting malicious updates. When the attacker uses a small scale to bypass Multi-Krum, the defense mechanism can still dilute the malicious update by weighted averaging the remaining updates. Although the experiment in Appendix A.1 shows that the defense is not reliable with longer training, Multi-Krum could be a good start for future work. - Additionally, the differential privacy algorithm can render indicators invalid while sacrificing generative performance, as discussed in Appendix F (Lines 666-671). This is mainly because noise or norm clipping treats all parameters uniformly. Since AdaSCP requires specific indicators with a large magnification factor, locating candidate indicators and filtering outliers after comparison with other updates would be more efficient and effective. This method is preferable to comparing the distance of all parameters, which tends to average out outliers and makes it difficult to filter out malicious clients. - Moreover, the experiment in Appendix A.2 shows that our backdoors diminish after 100 rounds of continued training with clean data, suggesting a potential mitigation strategy for future work. Lastly, since the triggers are implanted in the input noise, releasing only the generated results without exposing the model parameters is another viable defense. 
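The point above that noise or norm clipping "treats all parameters uniformly" can be made concrete with a minimal sketch of DP-style server-side processing of a client update (function name and constants are illustrative, not from the paper):

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, sigma=0.01, rng=None):
    """DP-style processing: clip the update's L2 norm to clip_norm,
    then add Gaussian noise. Both steps act on all parameters
    uniformly, so a few strongly scaled indicator parameters are
    attenuated along with everything else rather than being located
    and removed."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma, size=update.shape)
```

Because the clipping factor and noise scale are global, this blunts a large magnification factor only at the cost of generative performance — which is why the rebuttal argues that locating candidate indicators and filtering per-parameter outliers would be more effective than comparing whole-update distances.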
We will add a new section to discuss the potential defense mechanisms and mitigations of *DataStealing* backdoors. **Response to "Comparative Analysis":** Thank you for your suggestion. We have discussed AdaSCP in comparison with prior attack methods in Section 4.1 (Lines 263-281). In summary, Data Poison and PGD Poison encounter difficulties in effectively implanting multiple backdoors into diffusion models. Model Poison fails to overcome advanced FL defenses due to improper scale values. BC Layer Substitution is unsuitable for training diffusion models and results in training collapse. Our AdaSCP outperforms other methods by using critical parameters with adaptive scale factors, which balance stealth and efficiency and prevent training collapse in diffusion models. We will add a conclusion to Section 4.1 to help readers understand the advantages of our method. **Response to "Scalability and Generalizability":** Thank you for your suggestion. We conducted an additional experiment with the LSUN bedroom dataset, which has a higher resolution of 256x256. More details can be found in our global response and the PDF file. **Response to "Extended Analysis of Parameters":** Thank you for your suggestion. We will expand the analysis of the impact of various parameters in Appendix A. As for the proportion of critical parameters, the results in Appendix A.3 show that 0.4 is the trade-off point between performance and the number of critical parameters. As for the patch size of ComboTs, the results in Appendix A.5 indicate that a 3x3 patch achieves the best performance. Smaller patches are insufficient to distinguish the triggers from the noise, while larger patches obscure the image and degrade performance. We will add more details to enhance the parameter analysis. **Response to "Complexity and Computation":** The complexity of ComboTs depends on the time required to select the triggers from the potential positions, as defined in Section 3.2 (Lines 123-125). 
For AdaSCP, the efficiency is influenced by the batch size and the threshold hyperparameter $\mathcal{T}$, as demonstrated in Appendix C.1 and Algorithm 4. The complexity primarily arises from the process of searching for critical parameters and identifying candidate indicators. The time consumption of AdaSCP varies with the hyperparameters and the model complexity. For example, when fine-tuning on the LSUN bedroom dataset (256x256 resolution) with a batch size of 3 and $\mathcal{T} = 0.05$, the running time for finding critical parameters and candidate indicators is 2 minutes and 29 seconds. This process is conducted after all candidate indicators have been exhausted, as demonstrated in Algorithm 1. In our experiments, we set the number of candidate indicators to 10. Enlarging the number of candidate indicators is a way to increase the efficiency of our method. We will add more details for the complexity analysis in the revised version. **Response to "Presentation Improvements":** Thank you for your suggestion. To enhance the readability of our paper, we will polish the entire manuscript and move some details to the Appendix. **Response to "Limitations":** As discussed above, we will include an additional section on the proactive defenses against *DataStealing*. In line with ethical considerations and responsible disclosure, we will assist researchers and managers in preventing the exploitation of this vulnerability. Raising awareness about this issue can lead to the adoption of stricter security standards in the design and implementation of training diffusion models with FL, thereby minimizing the risks of data breaches and misuse. Additionally, we will release our code to encourage the development of more advanced privacy-preserving mechanisms. Furthermore, we will expand the discussion to cover the limitations and potential societal impacts on privacy-preserving technologies in FL and relevant industries. 
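The critical-parameter search discussed in the complexity response is only outlined in this thread, so the following is a hypothetical sketch of the general idea — rank parameters by update magnitude, treat the top fraction as critical, and amplify only those — with `frac=0.4` echoing the trade-off point reported in Appendix A.3. The function name and the magnitude-based selection criterion are our illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def scale_critical(update, frac=0.4, scale=5.0):
    """Hypothetical sketch: treat the top `frac` of parameters by
    |update| magnitude as critical and amplify only those, leaving
    the rest untouched so the overall distance to benign updates
    stays small. `frac` and `scale` are illustrative values."""
    k = max(1, int(frac * update.size))
    idx = np.argsort(np.abs(update))[-k:]   # indices of "critical" params
    out = update.copy()
    out[idx] *= scale
    return out
```

Scaling only a subset of parameters, rather than the whole update as in plain Model Poison, is what lets the malicious update stay close to benign updates under distance-based defenses while still dominating the aggregate in the directions that matter.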
--- Rebuttal Comment 1.1: Comment: We appreciate the authors' patience and thorough rebuttal. After careful consideration and review, we decide to maintain the original score.
Summary: This paper studies the privacy risks of diffusion models in federated learning (FL) through the lens of data stealing attacks. The authors propose the ComboTs method to target a large number of images. To further defeat advanced distance-based FL defenses, the authors propose the AdaSCP attack method which adaptively scales the critical parameters in the updating process. The evaluation results demonstrate the effectiveness of this attack. Strengths: - The first work to consider the privacy risks of diffusion models under the FL setting - Good performance with low MSE of the recovered images Weaknesses: - Problematic attack settings - Insufficient experiments and explanations Technical Quality: 2 Clarity: 3 Questions for Authors: This paper explores the data leakage risks of diffusion models in federated learning. The authors conduct the data stealing attack via backdoor attacks. To target a large number of images and bypass the advanced FL defenses, the authors propose the ComboTs and AdaSCP methods, respectively. The evaluations showcase the effectiveness of the proposed data stealing attack. However, there are still some critical issues. - The attack settings are problematic. In this paper, the attacker is assumed to have access to the training dataset of the poisoned client, such that the attacker can insert the backdoor triggers into the target images, and finally the attacker will be able to recover the target images according to their backdoor trigger patterns. However, since the attacker already has access to the target images, why bother stealing them using the proposed complex methods? Just because of the so-called "Under strict data privacy protection, attackers can not upload or download images from the infiltrated client" in Section 3.1? This assumption does NOT make any sense. The attacker can even check every pixel value of the target image to locally reconstruct it without downloading it from the client. 
I would suggest the authors clearly re-explain the motivation and the settings of the attack. - The authors mentioned the FID metric to measure the performance of diffusion models. However, the authors did not clearly state whether a higher or lower FID is better. Furthermore, when discussing the evaluation results, the authors only focused on the MSE score. I would suggest the authors elaborate more on this metric if they use it as an evaluation metric. - The authors only conduct the experiments on two image datasets with low resolutions. I would suggest the authors also evaluate the performance of their attack on another dataset with a much higher resolution, e.g., a subset of ImageNet. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See my detailed questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments. We address your concerns below: **Response to Q1:** 1) **Access to data does not mean it can be stolen.** - Federated Learning (FL) aims to protect data privacy by only sending model updates to the central server. In the FL setting, private data **cannot** leave the client. The assumption that an attacker can check every pixel value to locally reconstruct target images conflicts with the federated learning framework, where only model updates are transmitted, not raw data. - FL is often used to train models on highly sensitive or valuable data, such as medical records. In the era of deep learning, data is a critical asset and involves organizational security issues. According to *Article 5 and Article 32 of the European Union General Data Protection Regulation (GDPR)*[1], organizations are required to take strict measures to protect data privacy and security. For such highly protected data, some organizations enforce strict access controls, including banning USB drives, restricting network access, and disabling copy-paste, ensuring data cannot be downloaded from the client. In this environment, sending unauthorized messages would trigger security alarms. However, model updates are authorized for transfer in the FL system. Thus, it is practical that attackers can infiltrate the training process and implant backdoors to steal data but do not have the ability to extract data straightforwardly in the FL framework. The malicious goal can be secretly achieved by modifying the training code of the client without any risky transmission. 2) **An example to enhance FL trustworthiness and security.** - People might assume that advanced federated learning frameworks are sufficiently secure and overlook this potential danger. However, a false sense of security can be more dangerous and may be used for malicious purposes, such as espionage. 
Our work is the first to demonstrate that training diffusion models under the FL setting poses a threat of leaking thousands of images by implanting multiple backdoors, which control the generative behavior by only uploading model updates in FL. Our aim is to enhance the trust and security of FL and protect data privacy in the end. We will add more key details to the introduction to ensure that readers can more easily understand our setting in the revised manuscript. [1] Voigt, P., & Von dem Bussche, A. (2017). The EU General Data Protection Regulation (GDPR). *A Practical Guide, 1st Ed., Cham: Springer International Publishing, 10*(3152676), 10-5555. **Response to Q2:** Thank you for your suggestion. In Table 1, we have indicated that "↓" means lower is better. The FID (Fréchet Inception Distance) evaluates the quality of generated images by calculating the Fréchet distance between the feature distributions of generated and real images in a pre-trained Inception network. A lower FID indicates that the generated images are closer to real images, implying higher quality. The FID becomes very high when the diffusion model experiences training collapse, which mainly occurs in Model Poison and BC Layer Substitution due to improper scale rates and training approaches (Lines 278-281). As for AdaSCP, successful multiple backdoor attacks result in a slight increase in the FID score, averaging +10.31 on CIFAR-10 and +1.73 on CelebA compared to the pre-trained diffusion model, as demonstrated in Appendix F (Lines 663-666). This effect can be alleviated with longer training, as shown in Appendix A.1 (Lines 503-505). Since FID cannot directly reflect the performance of implanted backdoors, we primarily focus on comparing the MSE metric. More details about FID are provided in the Appendix. In the revised version of the main paper, we will add more details to help readers better understand the FID metric and the table results. 
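The FID described above has a closed form once a Gaussian is fitted to each feature set: $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}\bigl(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2}\bigr)$. A NumPy-only sketch of this distance (it takes pre-extracted `(n, d)` feature arrays; in the real metric these come from a pre-trained Inception network):

```python
import numpy as np

def fid(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two feature sets.
    Lower is better: identical distributions give (numerically) zero."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    # tr((s1 s2)^{1/2}) = sum of square roots of the eigenvalues of
    # s1 @ s2, which are real and non-negative for PSD covariances
    # (up to rounding noise, hence the .real and the clip).
    eig = np.linalg.eigvals(s1 @ s2).real
    tr_covmean = np.sqrt(np.clip(eig, 0.0, None)).sum()
    return float(((mu1 - mu2) ** 2).sum()
                 + np.trace(s1) + np.trace(s2) - 2.0 * tr_covmean)
```

This also makes the rebuttal's point visible: FID measures distribution-level image quality, not whether a specific backdoor fires, which is why the per-image MSE between target and generated images is the primary metric for the attack itself.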
**Response to Q3:** Thank you for your suggestion. We conducted an additional experiment with the LSUN bedroom dataset, which has a higher resolution of 256x256. More details can be found in our global response and the PDF file.
Rebuttal 1: Rebuttal: First of all, we would like to thank you for your time, constructive critiques, and valuable suggestions, which have greatly helped us improve our work. We are also grateful that the reviewers unanimously regard our work as novel and convincing. Below, we respond to the suggestions regarding experiments on additional datasets with higher resolution. We adopted the pretrained model provided by the official DDPM implementation, which is trained on the LSUN bedroom dataset with a 256x256 resolution. The *DataStealing* experiment is conducted by fine-tuning the pretrained model with a subset of the LSUN bedroom dataset, containing approximately 300,000 images. Considering the limited rebuttal time and the computational resources required for training, we set the batch size to 8 and the number of target images to 50. Each client trains on 500 images for one round. The EMA scale is decreased from 0.9999 to 0.999 for quicker convergence. The quantitative and qualitative results are shown in Table A and Figure A in our uploaded PDF file. We compare AdaSCP with several effective attack methods, such as Data Poison, Model Poison, and PGD Poison. The results show that AdaSCP still outperforms other methods in the *DataStealing* task under advanced FL defenses, further supporting our work. To address the lengthy 20-hour FID calculation time required for sampling 50,000 images at a 256x256 resolution, we used only 5,000 images, reducing the inference time to 2 hours. However, the limited training and inference time lead to higher FID values compared to other datasets. We will continue working on the LSUN bedroom dataset to provide more complete results in the revised manuscript. Thank you again for your valuable time. We sincerely look forward to further discussions with you. Pdf: /pdf/9b80e8e21b0d57d5272a472e901b4dfca83c39de.pdf
NeurIPS_2024_submissions_huggingface
2024
An Efficient High-dimensional Gradient Estimator for Stochastic Differential Equations
Accept (poster)
Summary: The article studies classical stochastic control in the continuous d-dimensional SDE setting (with and without jumps). The authors suggest computing the gradients of the value function (expected total reward in the non-discounted finite time regime), which can be applied (in a model-based setting) to policy-gradient-type algorithms popular in RL. While computing the derivatives of the expectations in a straightforward fashion would lead to the simulation of n (dimension of the approximation space for the control) expectations that require the simulation of one SDE each, the authors propose a clever interchange of derivatives that reduces the number of SDEs to a polynomial in d. Very careful proofs are given, and the findings are applied to a simple linear SDE example with a quadratic cost function. Strengths: The article introduces a creative switching trick (switching the order of derivatives, and derivatives with expectations) that reduces the simulation dimension compared to a straightforward estimator. The rest of the paper consists of a rigorous verification of interchanging limits (derivatives) and expectations. The proofs are careful (mainly based on differentiability properties of SDE flows); I spotted only a few typos. Weaknesses: I do not agree with the comparison to RL (= sample-based stochastic control) in Section 1.1. The entire point of RL is to provide model-free estimators; in particular, the policy gradient theorem cancels out the model transitions. Here, the situation is different. The model ($\mu$ and $\sigma$) is used to write down the SDEs (2.7) required to simulate the estimator. This is not a problem but shifts the article more towards the stochastic control community. I cannot see a clear interest in the ML/AI community in the questions raised; the control setting of SDEs (jump SDEs) is natural in the probability/math finance community but not at all in ML/AI. 
Even when I try, I struggle to come up with examples relevant for the ML/AI community (nor do the authors). The article is written for a purely mathematical audience with research-level skills in stochastic process theory; the article could appear almost unchanged in journals such as SPA or the SIAM Journal on Control and Optimization. I think NeurIPS is not the right place for publication. I am willing to increase my scores if a more relevant example than the quadratic problem could be provided that shows the value of the switching trick to the ML/AI community. The appendix is chaotic. It is tricky to follow the proofs because parts of the proofs are deferred to other sections of the appendix. Given that the proof only justifies the interchange of limits and expectations, it looks much harder than it is. Technical Quality: 4 Clarity: 2 Questions for Authors: 1. You will typically have to solve the three SDEs using numerical schemes. Can you provide additional assumptions on the model (the coefficients) that, combined with numerical SDE results (let's say in the Brownian case only), provide a result for the estimation error of the gradient? 2. The additional derivatives on the model coefficients make me doubt the SDE scheme (2.7) works well if the coefficients are not so nice. Many control problems in applications are Heston-type models, which are already problematic for numerics with the square-root variance coefficient. I know they do not satisfy the assumptions imposed; did you still try your simulation scheme? 3. I do not understand why you included the jumps without an application in mind. This is normal for a maths journal, but a bit strange in ML/AI. Could you please provide an example? 4. A few minor points: (i) The generator of a Markov process is the action plus the domain. Could you please be a bit more careful when you rework the article? 
(ii) You might want to mention that uniformly bounded derivatives imply linear growth which is the standard property required for global existence. (iii) I am probably missing where you discuss properties on the model that imply Assumption 2. If not I find assumptions on the derivatives of the mean rewards disturbing that I do not know how to check. (iv) Typo in Assumption 1, 2. (v) At least for the larger expectations it would help to be more consistent with the use of brackets. (vi) Could you please put references to the quadratic loss examples? Since this is the only application it would be good to see where and how it is used. (vii) the use of colons is not very helpful, for instance in (2.6) it is a bit complicated to understand what object is defined (I guess you want to define Z). (viii) The chain rule is missing in (D.3) Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: The authors did not discuss the limitations or the practical problems of the results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. We hope that the following response will help to address some of the concerns. 1. **Applicability and interest to the ML community:** Recently, there has been significant interest in approximating continuous-time optimal control using neural networks and gradient methods, with applications in finance [1], scheduling of queuing systems [2], and stabilizing stochastic systems [3]. Our method offers a new scalable approach to approximate the optimal policies of these complex stochastic control problems. Additionally, our methodology can be applied to many ML-in-science contexts, including PDE-constrained optimization [4] and neural SDEs [5,6,7]. 2. **Stochastic control and RL:** We note that our literature review in Section 1.1 aims to acknowledge and compare previous research on Monte Carlo gradient estimation and its applications. We consider both policy gradient methods in RL and gradient methods in stochastic control to be important examples. However, we do not intend to claim that they are equivalent. We will clarify this by improving the writing in this section. 3. **Applications of jump diffusion models:** Regarding the reviewer's concern about the applicability of diffusions with jumps, we note that many models in the finance literature require the inclusion of jumps to accurately capture market behavior [8] (cited in the paper). Therefore, ML models for financial applications would benefit from the versatility of including jumps. Moreover, continuous-time Markov chain (CTMC) models are widely used in chemistry, biology, and physics to model the stochastic behavior of natural systems (see [7] and references therein, also cited in the paper). CTMCs are also a valuable tool for analyzing the behavior of discrete event systems. The jump-diffusion model considered in this paper captures a large class of CTMCs, underscoring the importance of including jump behavior in the SDEs we study. 4. 
**Assumptions on the smoothness of SDE coefficients:** We address the concerns regarding numerical error analysis and the applicability of our methodology when the SDE coefficients are not smooth in the "Author Rebuttal" section. In particular, we have tested our estimator for the CIR process with a square-root volatility, yielding results that are consistent with our expectations. We believe that the generator gradient estimator remains unbiased with finite variance for a wide range of non-smooth SDE models, and we support this claim with numerical experiments. We hope these additions will alleviate some of the concerns. 5. **Typos and the organization of the Appendices:** We thank the reviewer for their careful review and for pointing out typos and presentation issues within this paper. We will address these issues in the revision. Regarding the organization of the appendices, we aim to first present an overview of the important supporting results (presented as lemmas and propositions) and discuss why proving these results will imply the main theorems. We defer the detailed proofs of these supporting results to the appendices for interested readers. These proofs are typically rigorous checks for the uniform integrability of the pre-limit expectations and are lengthy due to the presence of many terms. We note that in order to get the correct polynomial growth order of the variance in Theorem 3, a careful analysis of the growth power of the parameters and rewards is required. 6. **Establishing Assumption 2:** Regarding the difficulty of checking Assumption 2, it can be established using a stochastic flow argument as in [Kunita 2019, Chapter 4] (cited in our paper) if the rewards are sufficiently smooth and the derivative processes are integrable. This can also be established using PDE arguments (see [Evans 2022], also cited in our paper). 
However, due to space limitations, we didn't include a full discussion of how this is done and instead presented it as an assumption. This is discussed in lines 196-201 after Assumption 2. Finally, we have decided to submit this manuscript for publication in the proceedings of NeurIPS to share our research output with a broader community. We believe that our methodology will find exciting applications beyond our domain of expertise. This aligns with the spirit of NeurIPS. We believe that these additional discussions and clarifications enhance the quality and clarity of the paper. If our response addresses your concerns, we kindly request that you consider raising your score. [1] Fan, Lei, and Justin Sirignano. Machine Learning Methods for Pricing Financial Derivatives. arXiv preprint arXiv:2406.00459 (2024). [2] Ata, Baris, and Ebru Kasikaralar. Dynamic Scheduling of a Multiclass Queue in the Halfin-Whitt Regime: A Computational Approach for High-Dimensional Problems. Available at SSRN 4649551 (2023). [3] Zhang, Jingdong, Qunxi Zhu, and Wei Lin. Neural stochastic control. Advances in neural information processing systems 35 (2022): 9098-9110. [4] Sirignano, Justin, Jonathan MacArt, and Konstantinos Spiliopoulos. PDE-constrained models with neural network terms: Optimization and global convergence. Journal of Computational Physics 481 (2023): 112016. [5] Tzen, Belinda, and Maxim Raginsky. Neural stochastic differential equations: Deep latent gaussian models in the diffusion limit. arXiv preprint arXiv:1905.09883 (2019). [6] Kidger, Patrick. On neural differential equations. arXiv preprint arXiv:2202.02435 (2022). [7] Jia, Junteng, and Austin R. Benson. Neural jump stochastic differential equations. Advances in Neural Information Processing Systems 32 (2019). [8] Merton, Robert C. Option pricing when underlying stock returns are discontinuous. Journal of financial economics 3.1-2 (1976): 125-144. --- Rebuttal Comment 1.1: Title: Thanks for answering my questions! 
Comment: Thanks for answering my questions! I see in the other reviews good interest in PDE constraint optimization. I cannot judge this direction and it seems this is already largely reflected in the scores of other reviewers. From the point of view of RL and also Math Finance I am still not convinced. ML has not proved much relevance in finance and I do not believe it will in the future. There are plenty of numerical approaches towards affine processes (CIR was really tortured with all possible techniques). Without seeing a proper comparison I cannot believe that the (super inefficient!) policy gradient approach can add any value in this model-based setting. It could in the model-free setting which is not possible with the present approach. I still do not understand why there is no proof for Assumption 2 (in the appendix). If you can prove it, prove it. And identify clearly a set of necessary conditions. Also I do not see a reason not to include a numerical analysis, this would make the paper much stronger. The contribution of only the nice (!) main idea and justifying some changes of limits that everyone in the ML community would believe anyways is a bit too small for NeurIPS, given a practical need is not completely clear. I will keep my score. --- Reply to Comment 1.1.1: Title: Comments to the reviewer's feedback Comment: We thank the reviewer for your quick feedback. We have the following response. **Application of Machine learning in finance:** We respectfully disagree with the comment that machine learning methods have "not proved much relevance in finance." In fact, the use of artificial neural networks for financial SDE modeling and optimization dates back to the early 1990s. Survey paper [9] summarizes **more than 150 papers** that use neural network-parameterized SDEs to model prices in option pricing and hedging. Additionally, recent papers surveyed in [9] actively explore the use of deep network architectures. 
This underscores the relevance of our proposed methodology in cutting-edge finance applications. Furthermore, based on the private consulting activities of some of the authors, neural networks and other machine learning techniques are very much alive and of significant interest in financial applications. Given these academic interests and industry experiences, we believe it is hard to argue otherwise. *We respectfully ask why you think that ML has not proven to be relevant in finance.* **Affine processes:** We do not intend to use our method to optimize simple affine processes. Our methodology is designed for estimating sensitivity or optimizing values driven by high-dimensionally parameterized jump-diffusions. Our method excels in modern ML settings where deep neural networks are used in the SDE parameters. **Application in sciences and engineering:** Other than financial applications, our methodology is well-motivated and an important contribution considering its application in science and engineering. For example, neural SDEs aim to learn the behavior of natural (physical) random processes with parameterized SDEs. Highly cited papers in this field include [7] and [5] (with and without jump). Neural SDEs typically optimize $\min_\theta E[g(X_\theta^x(T))]$ subject to an SDE $$ dX_\theta^x(t) = \mu_\theta(t, X_\theta^x(t)) dt + \sigma_\theta(t, X_\theta^x(t)) dB(t) + \chi_\theta(t,X_\theta^x(t))dN(t)$$ where $\mu_\theta,\sigma_\theta,\chi_\theta$ are neural network parameterized and $N(t)$ is a Poisson process. Gradient methods for approximating stochastic optimal control with a parameterized policy class [3] are also important applications. We remark that many neural network-parameterized SDEs are not widely used primarily because of the lack of scalable gradient estimators. Our contribution directly addresses this issue. 
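For concreteness, the neural-SDE objective $\min_\theta E[g(X_\theta^x(T))]$ quoted above is typically estimated by Euler-Maruyama Monte Carlo. The sketch below uses a toy scalar drift $\mu_\theta(x) = -\theta x$, $\sigma_\theta \equiv 1$, and no jumps as a stand-in for a neural-network parameterization (chosen here for illustration, not taken from the rebuttal):

```python
import numpy as np

def simulate_sde(theta, x0, T=1.0, n_steps=200, n_paths=5000, seed=0):
    """Euler-Maruyama paths of dX = mu_theta(X) dt + sigma_theta(X) dB.

    Toy linear parameterization (an assumption for illustration):
    mu_theta(x) = -theta * x, sigma_theta(x) = 1.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + (-theta * x) * dt + dB
    return x

def objective(theta, g=lambda x: x**2, **kw):
    """Monte Carlo estimate of E[g(X_theta(T))]."""
    return float(np.mean(g(simulate_sde(theta, x0=1.0, **kw))))
```

For this Ornstein-Uhlenbeck special case with $x_0 = 1$, $E[X(T)^2] = e^{-2\theta T} + (1 - e^{-2\theta T})/(2\theta)$ in closed form, so the Monte Carlo estimate can be checked directly.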
Given these numerous applications across multiple disciplines, we find it **hard to believe** that NeurIPS, where practitioners and theorists, scientists and engineers exchange ideas and advance machine learning theory and applications, is not a suitable venue to publish this work. **Rigorous validation of limit interchange:** We respectfully disagree with the claim that the interchange of limits is what "everyone in the ML community would believe anyways." Wrong intuition regarding the interchange of limits can lead to serious errors. A researcher might explore the following gradient estimation idea: From semigroup theory, write $E_x[g(X_{\theta}(t))] = (e^{tL_{\theta}}g)(x)$. Formal differentiation (taking the limit quotient) yields: $$\partial_\theta (e^{tL_{\theta}}g)(x) \stackrel{?}{=} t e^{tL_{\theta}} \partial_\theta L_{\theta} g(x) = t E_x[\partial_{\theta}L_{\theta} g(X(t))].$$ Although this can work for Markov jump processes, it is incorrect, unfortunately, for diffusions. The correct expression is precisely the representation in our Theorem 1. The reason readers find the ideas in Section 2 natural is that we have invested research effort into identifying a simple and intuitive justification, all backed by rigorous proof. Nevertheless, given this helpful discussion with the reviewer, we think including this incorrect reasoning in the paper could help to illustrate the need for caution when working with generators of SDEs. **Assumption 2:** We have considered including sufficient conditions on the model primitives to imply Assumption 2 but decided against it for the following reason: The primary focus of this paper is to rigorously justify the **validity** of the generator gradient estimator. Proving Assumption 2 using sufficient conditions could detract from this goal. The differentiability and moment bounds for the value are well-established in the literature. 
Also, justifying Assumption 2 from model primitives would introduce another set of assumptions typical in jump-diffusion analysis but potentially confusing for users, as we have explained in the previous response. If you still have concerns about establishing Assumption 2, we can provide a proof in the Appendix with smooth and bounded rewards (so that there will be no interference with the variance's growth power). [9] Ruf, Johannes, and Weiguan Wang. Neural networks for option pricing and hedging: a literature review. Journal of Computational Finance.
Summary: The paper proposes a novel algorithm for the gradient of machine learning objectives that are based on SDE paths, w.r.t. parameters. The method is scalable to large parameter vectors, as the computational complexity of the estimation is not related to the number of parameters. Empirical results show the scalability advantage over classical pathwise differentiation. Strengths: PDE-constrained optimization has remained a very difficult question due to the high cost of simulating PDEs, let alone taking derivatives. Therefore, despite its many good theoretical promises, it has not been deployed. This paper attacks the question by reducing the computational cost with a Monte Carlo estimation of the derivative, and largely advances the promise of PDE-constrained optimization. The writing is also very good, and the intuitions behind the methods are well illustrated. Weaknesses: 1. Despite the fact that the computational complexity is independent of the number of parameters, I suspect that as the number grows, the method would require more samples for an accurate Monte Carlo estimate of the gradients. Is it possible to either: 1. Provide a theoretical analysis on the convergence rate of the estimation w.r.t. the number of parameters. 2. Provide an empirical analysis on the convergence rate of the estimation w.r.t. the number of parameters. Technical Quality: 4 Clarity: 4 Questions for Authors: See weakness. Also, applying the Feynman-Kac formula twice requires the objective to be very smooth; can you provide some insights on the applications where this holds true? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: As written in questions, the smoothness requirements induced by Feynman-Kac etc. will limit the use cases. As written in weaknesses, the convergence w.r.t. a high number of parameters is still unclear. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and questions. We hope that the following response can address some of your concerns: 1. **Limitations induced by the assumptions on the smoothness:** In the "Author Rebuttal" section, we address concerns regarding the clear presentation of the limitations due to requiring additional smoothness in the SDE parameters, as well as the proof techniques we can adopt to circumvent the theoretical challenges when smoothness in the SDE parameters is lacking. This is also confirmed by our additional empirical demonstration using the CIR process and SDE with ReLU drift. These new experimental results suggest that our generator gradient estimator remains consistent with finite variance even when the SDE parameters are not smooth, indicating a wide applicability of our proposed methodology. Regarding the application of the Feynman-Kac formula, we use a probabilistic proof based on martingale methods. This approach requires applying the Feynman-Kac formula once to obtain the martingales in Lemma 1, taking the limit of the difference quotient, and proving uniform integrability to get the $\theta$ gradient. The introduction that uses PDE techniques is only for intuitive understanding. We note that probabilistic tools can sometimes yield weaker smoothness requirements than functional analytic methods. That said, our approach does require that the value function is a classical solution (with possible relaxation that the second derivative is continuous almost everywhere) to apply Itô's lemma and the Feynman-Kac formula (see Assumption 2). Nevertheless, Assumption 2 can be established using a stochastic flow argument as in [Kunita 2019, Chapter 4] (cited in our paper) if the rewards are sufficiently smooth and the derivative processes are integrable. It can also be established using PDE arguments (see [Evans 2022], also cited in our paper). 
Due to space limitations, we did not include a full discussion of how this is done and instead presented it as an assumption. This is discussed in lines 196-201 after Assumption 2. 2. **Convergence properties as the number of parameters increases:** The variance behavior of our algorithm, when the number of parameters becomes very large, could be a concern. Therefore, we included Figure 2 in Appendix F.2, showing histograms of the standard errors of each of the gradient coordinates under a wide range of $n \in [10^2, 10^8]$. We see that the tail of the standard errors grows from about 25 when $n \approx 100$ to about 200 when $n \approx 10^7$. We believe that this is not a prohibitive increase in variance, considering that the number of parameters is many orders of magnitude larger. This, coupled with the sketched error analysis in the "Author Rebuttal" section, suggests that the increase in variance is manageable. Additionally, the generator gradient estimator has a lighter tail compared to the pathwise differentiation estimator, suggesting a uniformly better variance performance in this setting. However, we should note that the standard error distribution depends on the $\theta$ where we evaluate the neural network. Here, we use the default initialization provided by PyTorch. Even though the generator gradient estimator doesn't need to simulate an SDE of dimension linear in $n$, we still need to evaluate the neural network and compute $\nabla_\theta \mu_\theta$ and $\nabla_\theta \sigma_\theta$, which could be costly if $n$ is large. However, upon investigation, we find that our proposed estimator $D(x)$ evaluates $\nabla_\theta \mu_\theta$ and $\nabla_\theta \sigma_\theta$ only once. In contrast, using pathwise differentiation requires evaluating $\nabla_\theta \mu_\theta$ and $\nabla_\theta \sigma_\theta$ at each time step to simulate $\nabla_\theta X_\theta^x(t)$. 
We believe this is why our estimator has a very stable computation time even when $n \approx 10^7$. 3. **Error bounds:** As we have explained in the "Author Rebuttal" section, the typical worst-case upper bounds produced by analyzing the numerical SDE schemes, applied to the derivative processes, while reassuring the validity of the proposed estimator, cannot explicitly demonstrate the variance and bias dependence on the dimensions. Moreover, we believe that the error's dependence on these dimensions is instance-specific, depending on the SDE and reward functions. These dependencies are technically challenging to explicitly reflect in an error bound. We hope that the previous empirical validation of the variance behavior can alleviate your concern. We believe that these additional discussions and clarifications enhance the quality and clarity of the paper. Additionally, we are confident that our methodology will attract interest from various research communities, and sharing our work at NeurIPS will help achieve this goal. If our response addresses your concerns, we kindly request that you consider raising your score. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I will keep my score as is.
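To make the per-step cost discussed in the rebuttal above concrete, here is a hedged sketch of pathwise differentiation for a toy scalar SDE $dX = -\theta X\,dt + dB$ (an illustrative stand-in, not the paper's model): the derivative process $Y = \partial_\theta X$ is simulated alongside $X$, and the drift gradient must be evaluated at every time step.

```python
import numpy as np

def pathwise_gradient(theta, x0=1.0, T=1.0, n_steps=400, n_paths=20000, seed=1):
    """Pathwise (forward-sensitivity) estimate of d/dtheta E[X_theta(T)^2]
    for the toy SDE dX = -theta*X dt + dB (an illustrative stand-in).

    The derivative process Y = dX/dtheta is simulated alongside X; note that
    the drift gradients are evaluated at *every* time step, which is the
    per-step cost the generator gradient estimator avoids.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    y = np.zeros(n_paths)                    # dX/dtheta starts at 0
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        y = y + (-x - theta * y) * dt        # dY = (d_theta mu + d_x mu * Y) dt
        x = x + (-theta * x) * dt + dB
    return float(np.mean(2.0 * x * y))       # chain rule for g(x) = x^2
```

For this Ornstein-Uhlenbeck example, $\partial_\theta E[X(T)^2]$ is available in closed form, so the estimator can be validated directly; the generator gradient estimator, by contrast, evaluates the parameter gradients of the coefficients only once per path.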
Summary: This paper formulates an efficient, unbiased, and finite-variance gradient estimator for an objective function that looks like the stochastic optimal control cost function (it is ubiquitous across various applications). The problem concerns overparametrized SDEs (the parameter dimension n is much higher than the state-space dimension d). The algorithm runs at a complexity invariant to the large n. The authors use the Feynman-Kac PDE for v_{\theta} and transform it to an equivalent PDE to solve. They then employ a pathwise differentiation estimator to estimate the 2nd order and 1st order derivatives in the PDE while exploiting certain properties of the Hessian. Strengths: 1. The applications of gradient estimators and the motivation behind this problem is well stated and supported. 2. The methodology is clear, especially when finding probabilistic representations of the gradient estimator. Weaknesses: 1. The authors discuss the limitations in the checklist but it would have been clearer to have a limitations section clearly outlined as claimed in the checklist. 2. The assumptions may be restrictive as mentioned. 3. The work could be improved if there was a numerical simulation. The example of LQR is an illustrative example; it may be of didactic use, and it should be relatively easy to implement a scenario where the parametrization is of much higher dimension. 4. The spatial derivatives \partial v_0 are still difficult to compute, but still better than \partial v_{\theta}. What are some error bounds you can get by using the later-derived probabilistic representations that can use Monte Carlo? Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Are there any error bounds from using the probabilistic representations for the spatial derivatives (see #4 in Weaknesses)? Happy to revise score if necessary. 
Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes they have mentioned some, rest may be in the above Weakness section unless there are some misunderstandings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and questions. We organize our responses as follows 1. **Limitations induced by the assumptions on the smoothness:** Thanks for the suggestion of including a separate discussion of limitations to avoid misunderstanding. In the "Author Rebuttal" section, we address concerns regarding the clear presentation of the weaknesses induced by requiring additional smoothness in the SDE parameters, as well as the proof techniques we can adopt to circumvent the theoretical challenges when smoothness in the SDE parameters is lacking. This is also confirmed by our additional empirical demonstration using the CIR process and SDE with ReLU drift. These new experimental results suggest that our generator gradient estimator remains consistent with finite variance even when the SDE parameters are not smooth, indicating a wide applicability of our proposed methodology. 2. **Error bounds:** As we have explained in the "Author Rebuttal" section, the typical worst-case upper bounds produced by analyzing the numerical SDE schemes, applied to the derivative processes, while reassuring the validity of the proposed estimator, cannot explicitly demonstrate the variance and bias dependence on the dimensions. Moreover, we believe that the error's dependence on these dimensions is instance-specific, depending on the SDE and reward functions. These dependencies are technically challenging to explicitly reflect in an error bound. 3. **High-dimensional parameterization:** For the LQG example, we have tested it in an extremely overparameterized setting where $n$ the dimension of $\theta$ is more than $10^7$. In this setting, our estimator outperforms the pathwise differentiation estimator in computation time as well as variance. We can also provide a demonstration of running gradient descent using our estimator, plotting the terminal performance of the control policy. If this could resolve your concern, please let us know. 
We believe that these additional discussions and clarifications enhance the quality and clarity of the paper. Additionally, we are confident that our methodology will attract interest from various research communities, and sharing our work at NeurIPS will help achieve this goal. If our response addresses your concerns, we kindly request that you consider raising your score. --- Rebuttal Comment 1.1: Title: response Comment: Thank you for the clarifications here and in your above rebuttal. With regards to point 3., I think it would be beneficial to have this demonstration. I will keep the score as it is
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful comments and questions. Your feedback provided valuable suggestions and directions for improvement. Below is a brief summary highlighting the main enhancements and changes we will be making. **Limitation and Generalization** We will add the following concluding remarks section to clarify the limitations of our theoretical results for the proposed generator gradient (GG) estimator. Additionally, we will address the possibility of extending our theoretical results to prove the consistency of the GG estimator when the smoothness assumptions (Assumptions 1 and 3) are violated. >The theoretical results in this paper have the limitation of requiring second-order continuous differentiability and uniform boundedness of the space derivatives of the parameters of the underlying jump diffusion. These strong conditions, which are standard in the literature of stochastic flows (cf. [Protter 1992] and [Kunita 2019] cited in the paper) to guarantee the global existence and uniqueness of the derivative processes in (3.5), are necessary to achieve the generality of the results presented in this paper. > > However, our generator gradient estimator often works even when coefficients are not continuously differentiable. This is true if the generator and rewards gradients are defined almost everywhere, and the derivative processes in (3.5), with almost everywhere derivatives of the SDE parameters, exist for every $t \in [0, T]$ and satisfy some integrability conditions. Examples include neural networks parameterized stochastic control with ReLU activation functions, heavy-traffic limits of controlled multi-server queues, and the Cox–Ingersoll–Ross (CIR) model. For these models, the existence and integrability of the derivative processes can be checked on a case-by-case basis, allowing the consistency and unbiasedness of the generator gradient estimator to be established. 
We confirm this by numerically investigating the CIR process and an SDE with ReLU drift in Appendix G. As promised in the concluding remarks, we conducted additional numerical experiments using the CIR process: $$X_\theta^x(t,s) = x + \int_t^s(\theta - X_\theta^x(t,r))dr + \int_t^s\sqrt{X_\theta^x(t,r)}dB(r)$$ and the ReLU drift SDE: $$X_\theta^x(t,s) = x + \int_t^s(\mathrm{ReLU}(\theta X_\theta^x(t,r)) + 1)dr + \int_t^s dB(r),$$ where $x, X_\theta^x(t,s), \theta \in \mathbb{R}$. We used the proposed GG estimator and the finite-difference derivative estimator (a consistent gradient estimator for both cases, though it performs poorly in high-dimensional settings) to evaluate the gradient $\partial_\theta v_\theta(0,x)$ of $$v_\theta(0,x) = E\left[\int_0^T X_\theta^x(0,t)dt\right]$$ at different values of $\theta > 0$. Numerical results in Tables 3 (CIR) and 4 (ReLU) in the attached supplement PDF indicate that in both cases, the GG estimator is consistent even if the global smoothness of the SDE parameters is violated. However, when $\theta < 1/2$, the CIR process is known to touch 0, making the derivative process in (3.5) only defined up to the first hitting time of 0. Therefore, the previously discussed generalized conditions don't hold. Although the numerical results in Table 3 still show consistency, we observe a surge in variance, suggesting that the GG estimator doesn't work reliably for the CIR case when $\theta < 1/2$. **Error Analysis of Numerical SDE Schemes** We believe it would be of significant theoretical value to demonstrate that the error of our estimator scales well with dimensions. However, it is challenging to investigate the convergence property of the algorithm in terms of the dimensions of $\theta$ and $x$. Below, we include a brief error analysis and explain why we decided against including it in the paper. 
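As a small concrete illustration of the finite-difference baseline mentioned above, the sketch below simulates the CIR dynamics with an Euler–Maruyama scheme and estimates $\partial_\theta v_\theta(0,x)$ by central differences with common random numbers. The function name, the discretization choices, and the clipping of the diffusion coefficient at zero are our assumptions for illustration, not details from the paper.

```python
import numpy as np

def cir_value_grad_fd(theta, x=1.0, T=1.0, dt=0.01, n_paths=20000, h=0.05, seed=0):
    """Central finite-difference estimate of d/dtheta E[int_0^T X(t) dt] for the
    CIR-type SDE dX = (theta - X) dt + sqrt(max(X, 0)) dB, simulated with an
    Euler-Maruyama scheme and common Brownian increments for theta +/- h."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    # Shared Brownian increments (common random numbers) reduce the FD variance.
    dB = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))

    def value(th):
        X = np.full(n_paths, x, dtype=float)
        integral = np.zeros(n_paths)
        for k in range(n_steps):
            integral += X * dt  # left-endpoint quadrature of the running integral
            X = X + (th - X) * dt + np.sqrt(np.maximum(X, 0.0)) * dB[:, k]
        return integral.mean()

    return (value(theta + h) - value(theta - h)) / (2.0 * h)
```

For this linear drift, $E[X(t)] = \theta + (x - \theta)e^{-t}$, so the exact gradient is $T - (1 - e^{-T}) \approx 0.368$ at $T = 1$, which the Monte Carlo estimate can be checked against.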
In our numerical experimentation, we use Euler schemes to approximate the SDE, which introduces a weak error (bias) of $O(\delta)$, where $\delta$ is the time-step discretization (cf. [1]). Typical analysis of the Euler scheme (also see [1]) results in a strong error of $O(\sqrt{\delta})$. Consequently, the second moment of our estimator using the Euler scheme can be bounded by $$E|D_\delta(x)|^2 \leq 2E|D_\delta(x) - D(x)|^2 + 2E|D(x)|^2 \leq C_1\delta + C_2(|x|+1)^{2m+4},$$ where $D_\delta(x)$ is the GG estimator from SDEs simulated using the Euler scheme, and the bound for the second term follows from Theorem 3. Here, $C_1$ would depend on $T$ and the boundedness of the space derivatives (up to second order) of $\nabla_\theta\mu_\theta$, $\nabla_\theta \sigma_\theta$, $\nabla_\theta\chi_\theta$, $\nabla_\theta\rho_\theta$, and $\nabla_\theta g_\theta$. $C_2$ depends on $T$ and the boundedness of space derivatives (up to second order) of $\mu_\theta$, $\sigma_\theta$, $\rho_\theta$, $\chi_\theta$, and $g_\theta$. These bounds are typically very loose in real-world examples and do not provide new insights into the validity of the GG estimator. Additionally, rigorously presenting this error analysis would require introducing a significant amount of new notation. Therefore, due to clarity and space limitations, we have decided not to include such an analysis. [1] Kloeden, P.E., Platen, E. (1992). Numerical Solution of Stochastic Differential Equations. Applications of Mathematics, vol 23. Springer, Berlin, Heidelberg. Pdf: /pdf/29d0c1eeb9adb6f05b80a54e1511279f06a14182.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models
Accept (poster)
Summary: The paper explores the novel use of large language models (LLMs) as gradient priors in a zero-shot setting. The primary focus is on lossless gradient compression, which is essential for distributed learning applications. The paper introduces LM-GC, a method that integrates pre-trained LLMs with arithmetic coding to transform gradients into a text-like format that LLMs can process efficiently. This approach significantly improves compression rates, surpassing existing state-of-the-art lossless compression methods by 10% to 21%. The method also shows compatibility with lossy compression techniques like quantization and sparsification. Strengths: 1. The paper leverages pre-trained LLMs in an unconventional way, using them as priors for gradient compression, demonstrating creativity and novelty. 2. The proposed method, LM-GC, consistently outperforms traditional compression methods (PNG, FLAC, GZIP, LZMA) across various datasets and architectures. 3. Extensive experiments validate the effectiveness of LM-GC, including ablation studies and compatibility analyses with other compression techniques. 4. The paper discusses potential applications of LLMs in gradient denoising, differential privacy training, and adversarial gradient detection, highlighting broader impacts and future research directions. Weaknesses: 1. The method incurs significant computational overhead, with compression times being relatively high. 2. The approach involves multiple hyper-parameters, such as context window size and byte grouping, which require careful tuning for optimal performance. 3. The integration of LLMs with arithmetic coding and the serialization process might be complex to implement and replicate. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. How well does LM-GC generalize to tasks and models not covered in the experiments? Are there specific scenarios where this approach might not be as effective? 2. 
How scalable is the proposed method for extremely large models or distributed training setups? Are there any known limitations in this regard? 3. How sensitive is the performance of LM-GC to different serialization strategies? Could there be more optimal serialization methods not explored in this paper? 4. What specific techniques or optimizations could be employed to reduce the computational overhead and improve the throughput of LM-GC? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: see the questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and detailed feedback. We are also encouraged by the recognition of our novelty and creativity. We now address the comments and share our vision for further efficiency improvement below. ---- ***Computation overhead and potential mitigation*** We have discussed several potential strategies in the general response. In summary, there are three possible directions: advanced implementation, efficient LLMs, and hardware acceleration. - **Advanced Implementation**: This improves our current research-oriented single-thread Python implementation. - **Efficient LLMs**: This is an ongoing research direction aimed at developing more lightweight LLMs. - **Hardware Acceleration**: This leverages the zero-shot property and is intended for production use. ---- ***Hyper-parameter selection*** Interestingly, our findings suggest that most hyperparameters align with human intuitions. For example, 4 bytes per group resemble floating point structures and yield the best performance, while hexadecimal encoding outperforms iso-8859-1 due to the more common characters used. Other parameters generally adhere to the scaling law, where larger models or context windows result in better performance, creating a trade-off between performance and available resources. These factors significantly reduce the complexity of hyperparameter search. ---- ***Implementation efforts*** Our implementation can be easily reproduced using common open-sourced packages like PyTorch, Huggingface, and TorchAC, while serialization can be done using pure Python string operations. We will release the source code upon publication. ---- ***Generalizability*** Our approach performs well on common benchmarks, including MNIST, CIFAR-10, and TinyImageNet, and on architectures like ConvNet, VGG, ResNet, and ViT for image classification. Further analysis may be needed for other tasks, such as generative models, particularly if different implicit biases are present. 
To the best of our knowledge, this remains an open question, and we will leave this analysis for future work. ---- ***How scalable is the proposed method for extremely large models or distributed training setups?*** We currently target scenarios like federated/distributed learning, where gradients are compressed before being sent to the server. We tested on models up to 6M parameters, a common model size for edge-device scenarios. Though the typical parameter size of LoRA fine-tuning is also around 6M, further analysis would be required for those extremely large models. ---- ***How sensitive is the performance of LM-GC to different serialization strategies?*** Table 1 shows that LLMs are sensitive to serialization strategies. For instance, hexadecimal numbers are up to 70% better than the extended ASCII set (iso-8859-1), and separators play a critical role in the reasoning ability of LLMs (40% difference for LLAMA 2). ---- ***Could there be more optimal serialization methods not explored in this paper?*** Yes, there may exist better strategies. This could be similar to the existing multi-modal LLM problems, where the key challenge is to project a new modality to the token space. While we propose an easy-to-use serialization method, understanding how and why LLMs can understand gradients deserves further research.
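To make the serialization discussion above concrete, here is a hypothetical sketch of a hexadecimal scheme in the spirit described: each float32 gradient entry becomes one 4-byte group rendered as 8 hex characters, with a separator token between groups. The function name, the big-endian byte order, and the comma separator are our illustrative choices; the paper's exact tokenization may differ.

```python
import struct

def serialize_hex(grads, sep=","):
    """Serialize a flat list of float32 gradient values into hex text:
    4 bytes per group (one IEEE-754 float32), 8 hex characters each,
    with a separator so a language model can treat each group as one unit."""
    return sep.join(struct.pack(">f", float(g)).hex() for g in grads)
```

For example, `serialize_hex([1.0, -2.0])` yields `"3f800000,c0000000"`; a string of this form is what the LLM would score token by token to drive the arithmetic coder.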
Summary: This is an interesting paper that tries to solve an important problem using large language models. Authors propose to use LLMs to compress the gradients with no number representation loss. They explain their method clearly by marrying LLMs with arithmetic coding. Strengths: This paper opens new views to LLMs as a tool in compression and coding. Gradient compression as stated in the paper is largely ignored. They prove LLMs can be used effectively as a prior for gradients in a zero-shot setting. Weaknesses: I would suggest that authors think about a proper theory to be added in the revised paper, or in the appendix. This combination is new and I suggest that some theoretical properties be developed in addition to experiments. Technical Quality: 4 Clarity: 4 Questions for Authors: Heavy tail priors are observed in training of deep models. Bring more context about number formats when the inherent distribution does not have expectation or variance, for instance Student-t distribution with 1 or 2 degrees of freedom. Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: When LLM is not a good tool for gradient compression? Under what circumstances they fail to present the prior. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and constructive feedback. We especially appreciate the recognition of our novelty and our efforts to address this timely question. We are pleased to share additional insights below. ---- ***Theoretical properties*** Thanks for the suggestion. We agree that theoretical analysis could be insightful. The fundamental theories behind entropy coding have been well-studied in the existing literature. On the other hand, understanding the behavior of LLMs is a non-trivial ongoing research direction. We will leave it to future work at this point. ---- ***Clarification on the question*** We are not entirely clear on the question. We would be more than happy to address it during the discussion phase if the reviewer could kindly provide further clarification. ---- ***When is LLM not a good tool for gradient compression?*** The main limitation of our approach is its high computational resource demand, making it less suitable for weaker devices such as IoT devices or real-time systems. However, this constraint could be significantly improved with the techniques discussed in our general response, enhancing scalability. Furthermore, our method does not require any training, making it especially well-suited for hardware acceleration in production deployments. ---- ***Under what circumstances do they fail to present the prior?*** Our approach performs well on common benchmarks, including MNIST, CIFAR-10, and TinyImageNet, and on architectures like ConvNet, VGG, ResNet, and ViT for image classification. Further analysis may be needed for other tasks, such as generative models, particularly if different implicit biases are present. To the best of our knowledge, this remains an open question, and we will leave this analysis for future work.
Summary: The paper proposes LLMs as compressors for gradients of a neural network. The target use case is a distributed learning setting, where the gradient updates need to be compressed before being shared. The gradients are converted to hexadecimal representation and the LLM's outputs are used in an arithmetic coder to generate the compressed representation. Post-rebuttal edit: Improved rating 3->5 Strengths: The idea is certainly novel. I do not know of prior works using LLMs to compress numerical data. It works better than the baseline methods as demonstrated in the experiments across multiple model architectures and image datasets. The LLMs don't require any dataset-specific training; therefore, they can serve as a general prior for floating-point datastreams. Weaknesses: The method lacks comparison to proper baseline methods. > Line 195: "We compare our method to state-of-the-art lossless compression techniques that originally targeted different data types." There needs to be justification why there isn't a comparison to a baseline method that is targeted at a stream of floating point values (or even specialized to gradients). I don't see significant advantages for using LLMs (or methods designed for image/audio compression) over a method specialized to the task. * There should be a comparison to a simple adaptive entropy coder such as run-length coding or LZW. * There should be a comparison to a static codebook + arithmetic coding. Here are a few other works that specialize in compressing floating point values: * https://computing.llnl.gov/projects/fpzip Fast and Efficient Compression of Floating-Point Data * https://ieeexplore.ieee.org/document/1607248 Fast lossless compression of scientific floating-point data Besides, it is also clearly impractical to use LLMs for this task due to their computational costs. The LLMs use excessively more compute than the bandwidth savings could justify. 
Technical Quality: 1 Clarity: 3 Questions for Authors: Figure 4: What does 'plain' compression entail here? Is it using a codebook with arithmetic coding? Confidence: 3 Soundness: 1 Presentation: 3 Contribution: 2 Limitations: The solution proposed is extremely resource-intensive for the given task, which may have societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and insightful feedback. We hope our response addresses the concerns and conveys the broader vision of our work. We kindly emphasize that our primary contribution is demonstrating **the potential of pre-trained zero-shot large language models (LLMs) as effective gradient priors through the lens of lossless compression**, a point also supported by R-UCGG and R-Br5V. As motivated in L21-23, the lack of proper prior models has been a critical barrier for applications such as denoising or upsampling in the domain of gradients. Our work is the first to demonstrate this potential and explore its application in lossless gradient compression. We believe our findings will inspire further research beyond just arithmetic coding. The potential impact also outweighs the temporary computational overhead, as we have discussed in the general response. We now address the concerns regarding the lossless gradient compression task below. ---- ***Justification of using LLMs.*** We first emphasize that traditional baselines struggle to compress gradients effectively in complex scenarios. As illustrated in Tables 2 and 3, our method exceeds the best baseline by 20% on ViT and 21% on TinyImageNet. This highlights the difficulty of modeling complex gradients and justifies the usage of our LLM prior. The following experiments recommended by the reviewer further stress the point. ---- ***Missing baselines.*** We have compared our method to LZMA, one of the most robust **general-purpose** compression methods, commonly used in the 7z compression format. Additionally, it is important to note that **none of the suggested baselines are designed for gradients or high-dimensional data**. However, to further highlight our contribution, we have conducted additional experiments as follows. ---- **Comparison to simple adaptive coding** We additionally compare our method to run-length encoding (RLE). 
This experiment extends from Table 3, compressing gradients collected during training a ConvNet on TinyImageNet. For RLE, we consider three types of dictionaries: binary, hexadecimal ($H_n$, Table 1), and iso-8859-1 (extended ASCII to handle negative numbers). These methods use 1, 4, and 8 bits to represent symbols and always use 8 bits for counting. Note that this setting is favorable to RLE since gradient lengths can easily exceed 256 (8 bits). The results are presented in the following table. RLE failed to compress the data even with different codebooks, and our method clearly outperforms RLE, indicating that simple adaptive priors are ineffective for gradients. | RLE (bits) | RLE ($H_n$) | RLE (ISO) | LM-GC ($H_s$) | |:----------:|:-----------:|:----------:|:-------------:| | 450.28±0.3 | 278.08±0.2 | 198.57±0.0 | **71.90±0.0** | ---- ***Comparison to methods dedicated to floating-point compression*** While we acknowledge that the suggested baselines are intended for scientific floating points (1D to 4D, which are relatively low-dimensional compared to gradients [1]), we have extended the comparison in Table 2 and 3 to include floating-point-specific compression. Specifically, we compare our method to FPZIP [1], as we could not find any available implementation of [2]. The results, shown in Table A1 and A2 in the attached PDF, indicate that FPZIP performs comparably to LZMA. On the other hand, our LM-GC outperforms FPZIP by up to 17.2% across three datasets and around 20% across four architectures. This highlights the challenge of using heuristically designed priors and demonstrates the effectiveness of the LLM priors adopted in our method. ---- ***Practicability of LM-GC*** As previously discussed, the effectiveness of LLMs in handling complex gradients justifies their usage. We acknowledge that LM-GC requires more resources than traditional codecs. 
However, as detailed in the general response, significant acceleration can be achieved by an implementation in C++ with multi-threading. Additionally, ongoing research into more efficient LLMs suggests a promising future for further improvements. Notably, since our method does not require training, dedicated hardware can frequently achieve substantial acceleration for inference. ---- ***What does 'plain' compression entail here? Is it using a codebook with arithmetic coding?*** Plain means without any lossless compression. We will revise the text in the next version to avoid confusion. ---- We hope our response has taken care of your concerns and encouraged you to reconsider the score. We would be happy to answer any other questions or concerns during the discussion phase. [1] Fast and Efficient Compression of Floating-Point Data https://computing.llnl.gov/projects/fpzip [2] Fast lossless compression of scientific floating-point data https://ieeexplore.ieee.org/document/1607248 --- Rebuttal Comment 1.1: Title: Thank you for your reply. Comment: Thank you for your reply and for providing further experimental measurements. It helped to alleviate my concerns regarding the performance improvement. In my opinion, it is very interesting to see how well LLMs perform compared to traditional coding methods. Showing that there is a significant gap will help to inform future gradient compression algorithms and provide a strong baseline. We can study LLMs to figure out why they perform so well and where the current algorithms are lacking. I am still convinced that the use of LLMs for this purpose is impractical, and will remain that way for the foreseeable future, but the paper's findings have value for the research community. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt response and re-evaluation. We will definitely include the discussion and the new baselines in the next version.
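For reference, the RLE baseline discussed in this thread can be sketched as a toy byte-level coder (our own minimal variant with 8-bit counts, not the exact implementation behind the table above). It illustrates why simple adaptive coding fails here: serialized gradients have few repeated symbols, so the (count, symbol) pairs expand rather than compress the stream.

```python
def run_length_encode(data: bytes) -> bytes:
    """Toy RLE: emit (count, symbol) pairs with an 8-bit count, splitting
    runs longer than 255. High-entropy input roughly doubles in size."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        # Extend the current run while the symbol repeats (capped at 255).
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out += bytes([j - i, data[i]])
        i = j
    return bytes(out)
```

On input with no repeated neighbors, such as typical gradient bytes, the output is exactly twice the input length, matching the >100% RLE rates reported above.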
Summary: This paper studies the potential of LLMs to act as gradient priors in zero-shot settings. The property is evaluated on lossless gradient compression. The proposed method is able to surpass existing compression methods and improve compression rate by 10% to 21% across various datasets and architectures. Strengths: 1. This paper explores the potential of LLMs as gradient priors through the lens of lossless gradient compression. Given the massive model size and training datasets, LLMs encode many capabilities that can be used as tools in domains outside the text domain, such as optimizers https://arxiv.org/abs/2309.03409, and compressors https://arxiv.org/abs/2309.10668. This paper is one of these, which itself is fun and interesting. 2. The effectiveness of the methodology design such as Serialization and Compression is backed up with promising results. Weaknesses: 1. While this paper itself is fun, one concern remains: what are the benefits of using an LLM as a compressor if traditional approaches can sufficiently solve the problem? It is better to report the energy costs of LLMs and traditional baselines such as PNG, FLAC, LZMA, and GZIP to allow readers to have a full picture. 2. What is the potential of this paper's findings? Can the compressed gradients be utilized in practice? For instance, how could they be exploited to train neural networks? Technical Quality: 3 Clarity: 2 Questions for Authors: please see the above weaknesses. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback and are encouraged that our work was found interesting. We will now address the comments and share a broader vision of our work. ---- ***The benefit of using LLMs over traditional approaches.*** We highlight that traditional baselines struggle to compress gradients effectively in complex scenarios. As illustrated in Tables 2 and 3, our method exceeds the best baseline by 20% on ViT and 21% on TinyImageNet. This highlights the difficulty of modeling complex gradients and showcases the effectiveness of our LLM prior. Moreover, the baselines often fail to compress data adequately in some scenarios. For instance, LZMA, the best baseline, achieves only a 91% compression rate. These advantages demonstrate the benefits of our method, making it particularly favorable when communication cost or storage is the main system bottleneck. ---- ***Energy consumption*** We provide the energy consumption estimation in Figure A2 of the attached PDF. The consumption is estimated by the following equation: $ E = (\text{CPU power consumption} \times \text{CPU utilization} + \text{GPU power consumption} \times \text{GPU utilization}) \times \text{run time} $ Indeed, LM-GC currently consumes more energy. However, as discussed in the general response, we remain positive about future efficiency improvements. Moreover, in complex scenarios, the baselines often struggle to compress the data effectively (70.98% vs. 87.98% in Table 2 and 71.90% vs. 91.6% in Table 3), which justifies our method's investment in computational resources. ---- ***Can the compressed gradient be utilized in practice?*** Yes. An immediate application is federated learning, where clients accumulate gradients locally and communicate periodically with the central server. One of the main bottlenecks is the communication cost as the number of clients and model size grow. 
Assuming both sender and receiver have access to the same LLM, LM-GC (optionally with lossy compression, as shown in Figure 4) provides superior compression rates on gradients over the existing coding schemes (Table 1-3) and thus enables better scalability. ---- ***Broader impact of our findings*** In addition to LM-GC, our work highlights the potential of using large language models (LLMs) as gradient priors, which we find exciting and believe will inspire further applications. We propose several possible directions for exploration, which could be investigated in fully zero-shot, parameter-efficient fine-tuning (PEFT), or prompting settings, given the strong starting point provided by zero-shot LLMs. - **Inspiration from existing works**: With a robust prior $p(t_k|\texttt{BOS}, t_{<k})$ over gradient tokens, it is possible to draw inspiration from existing work of different modalities. For example, one could utilize the prior model to de-noise gradients by checking the likelihood, akin to image denoising with generative priors, or explore gradient restoration from quantized low bits to higher bits, similar to image super-resolution. In the latter scenario, the lower bits provide partial information or context for the LLMs. - **Advanced lossy compression**: Most of the existing lossy compression in federated learning assumes no prior knowledge exists between servers and clients or leverages heuristically designed protocols. In contrast, one can assume both sides have access to a fixed LLM. When certain tokens consistently exhibit high probability, they can be sampled directly from the LLMs, allowing more bits to be allocated to other tokens. - **Security**: By modeling benign gradients, potentially through PEFT or prompting, one could develop a gradient-based method to guard against backdoor attacks, akin to image anomaly detection using generative adversarial networks (GANs).
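As a trivial companion to the energy estimate earlier in this rebuttal, the stated formula can be written as a one-line helper (ours, for illustration only; power in watts, utilization as a fraction, run time in seconds, giving joules):

```python
def energy_joules(cpu_watts, cpu_util, gpu_watts, gpu_util, runtime_s):
    """E = (CPU power * CPU utilization + GPU power * GPU utilization) * run time,
    mirroring the energy-consumption estimate described in the rebuttal."""
    return (cpu_watts * cpu_util + gpu_watts * gpu_util) * runtime_s
```

For example, a 100 W CPU at 50% and a 300 W GPU at 10% running for 60 s gives (50 + 30) * 60 = 4800 J.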
Rebuttal 1: Rebuttal: # General Response \ We thank all reviewers for their time and constructive feedback. It is encouraging to hear that our work has been found both fun and interesting (R-Br5V) as well as creative (R-UCGG). We particularly appreciate the recognition from all reviewers of the novelty and effectiveness of our method across various datasets and architectures (R-Br5V, R-KkqK, R-Nmkh, R-UCGG). We will begin by recapping the importance of our work and then address the comments in each thread, respectively. ---- ### ***Importance of our work*** Our study is the first to investigate the applications of *general-purpose* LLMs in the field of *gradients*. In addition to traditional NLP tasks, we show for the first time that LLMs can act as a robust gradient prior model and comprehend gradients *without requiring fine-tuning or demonstrations*. This discovery will expand the potential uses of general-purpose LLMs and inspire further applications involving gradients, as discussed in L21-23. For instance, our LM-GC, an immediate application of LLM priors in lossless gradient compression, presents superior performance, especially in complex scenarios that traditional methods struggle to model. Despite the room for improvement in our implementation, our research opens up an entirely new direction that has never been explored. We are confident that our contribution outweighs these temporary computational burdens. With that being said, we have noticed several potential strategies for further efficiency improvement. Many of them are still ongoing research problems. We anticipate that both the field and our work will mutually benefit in the near future. - **Advanced Implementation**: Our current implementation is designed for research purposes and is a single-thread Python program that processes each window of gradients sequentially. 
Given that the average GPU utilization is around 10% and each window is independent, the implementation can be significantly improved by utilizing multi-threading and C++. This could potentially yield a 10 times improvement in speed. - **Efficient LLMs**: Accelerating LLMs is a popular research problem. Techniques such as quantization [1], KV caches [2], flash attention [3], and pruning [4] are already being explored. One promising but underexplored direction is distillation [5]. While pre-trained language models are powerful, we do not need all of their functionalities for our task. Extracting the desired functions (e.g., gradient priors) from LLMs might be a significant step toward efficient LM-GC. - **Hardware Acceleration**: LM-GC does not require any training. Inference of networks can often be significantly accelerated by AI-specific hardware such as NPUs, TPUs, or other application-specific integrated circuits (ASICs). [1] Lin, Ji, et al. "AWQ: Activation-aware Weight Quantization for On-Device LLM Compression and Acceleration." Proceedings of Machine Learning and Systems 6 (2024): 87-100.\
[2] Hooper, Coleman, et al. "Kvquant: Towards 10 million context length llm inference with kv cache quantization." arXiv preprint arXiv:2401.18079 (2024).\
[3] Shah, Jay, et al. "Flashattention-3: Fast and accurate attention with asynchrony and low-precision." arXiv preprint arXiv:2407.08608 (2024).\
[4] Ma, Xinyin, Gongfan Fang, and Xinchao Wang. "Llm-pruner: On the structural pruning of large language models." Advances in neural information processing systems 36 (2023): 21702-21720.\
[5] Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network." arXiv preprint arXiv:1503.02531 (2015). Pdf: /pdf/aade14da1252d48628bd2631ef80ad116160124f.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Improved Particle Approximation Error for Mean Field Neural Networks
Accept (poster)
Summary: The manuscript investigates the mean-field Langevin dynamics (MFLD) and improves the particle approximation error by removing the dependency on the log-Sobolev inequality (LSI) constant. This is of relevance, as the LSI constant in general might deteriorate with the regularization coefficient, limiting the impact of prior bounds. The Authors illustrate the applicability of their result with three examples. 1. An improved convergence of the MFLD, 2. sampling guarantees for the final stationary distribution, 3. uniform-in-time propagation of chaos w.r.t. the Wasserstein distance for the MFLD. Strengths: - Improving the estimates of the particle approximation of the MFLD is of interest given its appearance in the learning problem of mean-field neural networks, which has been an interesting research topic pursued by different research groups during the last years. The paper is of purely theoretical nature and improves upon the state-of-the-art by removing the dependence of the approximation error constant on the LSI constant. The technical tools to do so, seem novel. - The paper is well-written and -structured with the objectives and contribution clearly stated and pursued. - In my opinion, the content of the paper could be also of interest beyond the scope of the MFLD, as propagation of chaos results with favorable constants are of interest in a wide variety of fields. The Authors may want to consider commenting on this. Weaknesses: Apart from some minor questions addressed below, there is one point of critique that I would like to raise. Namely, the Authors do not really motivate why one should expect that the particle approximation _does not_ depend on the LSI constant. This, in my opinion, would improve the reading experience of the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: - line 61: How does the LSI constant $\alpha$ deteriorate with the regularization parameter $\lambda$? Could you provide a sketch to give some more information? 
This could also be worth including and referencing in the manuscript. And some more minor comments: - In line 28, the Authors may want to add one further line of work, namely "Mean field analysis of neural networks: A law of large numbers" and "Mean field analysis of neural networks: A central limit theorem" by J. Sirignano and K. Spiliopoulos, to exhaustively cover the literature. - In regard to (2), the Authors might want to mention that $\nabla \frac{\delta \mathcal{F}}{\delta \mu}$ is also known as the Wasserstein gradient $\nabla_W \mathcal{F}$. - lines 35, 36: Could you add references here? - lines 99, 100: Write $\mathcal{P}(\mathbb{R}^d)$. - line 100: Not sure what you mean by "it follwos that" here - line 157: $N$ not $d$ - line 234: identical to - lines 266-268: This sentence sounds a bit complicated. - line 301: Discussion Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The Authors do point out some limitations in the Conclusions, which are reasonable and in my opinion justifiable. They could further emphasize some limitations of their result following from Assumption 3. In particular, the uniform boundedness of $h$ seems to be a restriction, but since the class of covered neural networks is still fair, I would not consider it a substantial limitation. Yet, it could be highlighted. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive feedback and useful suggestions. **The Authors do not really motivate why one should expect that the particle approximation does not depend on the LSI constant.** The particle approximation error is due to the nonlinearity of $F_0(\mu)$ with respect to $\mu$. In fact, Langevin dynamics, a special case of MFLD for the linear functional $F_0$, can be simulated with only one particle. On the other hand, the LSI constant is affected by the regularization strengths $\lambda, \lambda'$ and the boundedness or Lipschitz continuity of $F_0( \mu_\mathbf{x})$ or $\delta F_0( \mu_\mathbf{x}) /\delta \mu$ w.r.t. $\mathbf{x}$. For instance, in the mean-field neural network learning setting (Eq. (14)), LSI is satisfied as long as the activation $h$ is bounded. In this way, these two concepts (approximation error and LSI) come from different factors and do not seem to have a direct relationship, and hence we expect that an LSI-constant-free approximation error can hold. **Q: line 61: How does the LSI constant deteriorate with the regularization parameter? Could you provide a sketch to give some more information? This could be also worth to be included and referenced in the manuscript.** Let us consider the Gibbs distribution proportional to $\exp( f(x) - g(x) )$, where $g$ is $c$-strongly convex and $f$ is uniformly bounded by $B$. Then, combining techniques of [Bakry and Émery (1985)] and [Holley and Stroock (1987)], we can prove LSI with an estimated LSI constant: $c \exp(-4B)$. Please also refer to Appendix A.2 of [Nitanda, A., Wu, D., and Suzuki, T. (2021)]. In our case of the proximal Gibbs distribution, $f$ and $g$ correspond to $- \frac{1}{\lambda} \frac{\delta F_0(\mu)}{\delta \mu}$ and $g = \lambda' \|x\|\_2^2$, respectively. Hence we can conclude that the LSI constant is $\frac{2\lambda' }{\lambda} \exp(- 4B/\lambda )$, as described on line 61 if $\|\frac{\delta F_0(\mu)}{\delta \mu}\|_\infty \leq B$. 
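Assembling the constants quoted above, the two-step estimate can be written out compactly (an illustrative sketch, not the paper's exact statement; we fold the temperature $\lambda$ into the exponent so that the bounded part has size $B/\lambda$ and the quadratic part is $(\lambda'/\lambda)\|x\|_2^2$, which matches the stated constant):

```latex
% Bakry--Emery: if g is c-strongly convex, then exp(-g) satisfies LSI(c).
% Holley--Stroock: adding a bounded perturbation f with ||f||_inf <= B'
% degrades the constant by a factor exp(-4B'):
\mu \propto e^{\,f - g}
\quad\Longrightarrow\quad
\alpha_\mu \;\geq\; c\, e^{-4B'} .

% Proximal Gibbs case:
f = -\tfrac{1}{\lambda}\tfrac{\delta F_0(\mu)}{\delta \mu},
\quad \|f\|_\infty \leq \tfrac{B}{\lambda},
\qquad
g(x) = \tfrac{\lambda'}{\lambda}\|x\|_2^2,
\quad c = \tfrac{2\lambda'}{\lambda},
% which yields the constant described on line 61:
\qquad
\alpha \;\geq\; \tfrac{2\lambda'}{\lambda}\, e^{-4B/\lambda} .
```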
This estimation is often used in the literature. For instance, please see [Nitanda et al., 2022; Chizat, 2022; Suzuki et al. 2023b]. Moreover, $f$ could be a Lipschitz continuous function because we can reformulate "the sum of a Lipschitz continuous function and a strongly convex function" into "the sum of a bounded function and a strongly convex function" by using Miclo's trick [Bardet et al., 2018]. We will mention these techniques in the Appendix. [Bakry, D. and Émery, M. (1985)] Diffusions hypercontractives. In Seminaire de probabilités XIX 1983/84, pages 177–206. Springer, 1985. [Holley, R. and Stroock, D. (1987)] Logarithmic Sobolev inequalities and stochastic Ising models. Journal of statistical physics, 46(5-6):1159–1194. **Other suggestions** Thank you for your thorough reading. We will update the manuscript and fix typos according to your comments. --- Rebuttal Comment 1.1: Comment: I thank the Authors for their reply. Since my comments were sufficiently answered, I remain with my initial positive evaluation.
Summary: The paper presents an improved finite particle approximation error bound for mean-field Langevin dynamics. The main result establishes an $O(1/N)$ gap between the original and $N$-particle objective which is independent of the LSI constant and improves upon the existing $O(\lambda/\alpha N)$ bound. This is applied to obtain improved convergence error and Wasserstein propagation of chaos of MFLD for shallow neural networks. Strengths: The independence of the particle discretization error on the LSI constant is a strong and useful result, especially in low temperature regimes where $\alpha$ can be exponentially large w.r.t. $\lambda^{-1}$. The proof technique of introducing the induced Bregman divergence to bound nonlinearity is both novel and simpler compared to previous analyses. The new error bound is easily incorporated into existing MFLD frameworks. Weaknesses: * Besides Theorem 1, the remaining results seem to be a straightforward application to the existing analysis of time discretization of MFLD. In particular, the $\exp(-\alpha\lambda\eta k)$ convergence rate and $\eta/\alpha\lambda$ error term of [1] remain and hence the overall rate and error bound still suffer from the LSI constant, albeit decoupled from particle error. * The corresponding Wasserstein error bound also automatically suffers from the LSI constant due to using Talagrand's inequality. While the constant is improved from $\alpha^{-2}$ to $\alpha^{-1}$, the difference seems to be incremental as $\alpha$ is already adversarially exponentially dependent on $\lambda^{-1}$. * The standard assumptions of MFLD apply, for example the activation must be Lipschitz smooth as well as bounded to ensure that Assumption 1 holds. These implications should be explained in the text alongside Assumption 3 as currently Assumption 1 is only given for an abstract functional and not for the specific form (14). [1] Suzuki et al, 2023. 
Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction. Technical Quality: 4 Clarity: 2 Questions for Authors: * Currently the dependence of the convergence rate on $\alpha$ in the MFLD framework may be unavoidable; however, is there a possibility that the time discretization error or Wasserstein error can also be made independent of the LSI constant? Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 2 Limitations: The authors address limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive feedback and helpful comments. **Besides Theorem 1, the remaining results seem to be a straightforward application. The convergence rate $\exp(-\alpha \lambda\eta k)$ and error bound $\eta/\alpha \lambda$ still suffer from the LSI constant.** As the reviewer pointed out, the results in Section 3.2 are basically obtained by the combination and modification of existing results. However, we would like to emphasize that our main aim is to derive an improved particle approximation error, and the results in Section 3.2 serve to demonstrate the effectiveness of our result. The fact that we can improve the approximation error of existing results by combining them with our result would be a point worth noting. **The Wasserstein error bound suffers from the LSI constant. Q: however is there a possibility that the time discretization error or Wasserstein error can also be made independent of the LSI constant?** At the moment, it is nontrivial whether the time discretization error of MFLD and the Wasserstein propagation of chaos can be independent of the LSI constant. However, it is worth mentioning that we can derive a propagation of chaos result in the TV norm, which is free from the LSI constant. This can be proven by replacing Talagrand's inequality with Pinsker's inequality $\mathrm{TV}(\mu , \mu') \leq \sqrt{2 \mathrm{KL}(\mu \| \mu')}$. The convergence in terms of the TV norm is also important as it implies the weak convergence of distributions. We will add this comment in the revision. **The standard assumptions of MFLD apply, for example the activation must be Lipschitz smooth as well as bounded to ensure that Assumption 1 holds. These implications should be explained in the text alongside Assumption 3.** Thank you for the suggestion. Indeed, the smoothness and boundedness of the activation function can be used for verifying Assumption 1. We will update the manuscript accordingly. 
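For reference, the two transport-type inequalities contrasted in this rebuttal are (standard statements; the LSI($\alpha$) implies Talagrand's $T_2(\alpha)$ by the Otto–Villani theorem):

```latex
\underbrace{\mathrm{TV}(\mu, \mu') \;\leq\; \sqrt{2\,\mathrm{KL}(\mu \,\|\, \mu')}}_{\text{Pinsker: free of the LSI constant}}
\qquad \text{vs.} \qquad
\underbrace{W_2(\mu, \mu') \;\leq\; \sqrt{\tfrac{2}{\alpha}\,\mathrm{KL}(\mu \,\|\, \mu')}}_{T_2(\alpha)\text{: inherits the factor } 1/\alpha}
```

An LSI-constant-free KL bound therefore transfers to TV unchanged, while the Wasserstein version necessarily picks up $1/\alpha$.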
--- Rebuttal Comment 1.1: Comment: Thank you for the reply. I will maintain my positive assessment of the paper.
Summary: This paper provides an improved bound on the particle approximation error of MFLD under conditions of a log-Sobolev inequality (and a couple of extra regularity conditions). This bound improves on previous bounds by removing the dependence of the bound on the LSI constant. The authors then apply their bound to get improved convergence rates of MFLD in the finite-particle setting, sampling guarantees for $\mu_*$ and a uniform-in-time propagation of chaos result analogous to previous literature. Strengths: - This work addresses an interesting problem - the convergence of MFLD - and provides a clear improvement over the previous best bounds in this domain. The proof method is also novel. - The work is extremely clearly presented, with all of the key contributions being well-motivated and explained. The relevant literature is also well summarized. - Overall, I consider this to be a very sound piece of theoretical work. Weaknesses: I have no major concerns about this work. The bound consists only of a quantitative improvement on existing bounds, rather than a completely novel bound, and still requires the same log-Sobolev assumptions as previous works, perhaps limiting its general applicability. In addition, they also assume bounded activation functions, which is an additional restriction. However, I consider both of these restrictions to be minor. Technical Quality: 4 Clarity: 4 Questions for Authors: I have no clarifications to request. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors do provide a (quite brief) discussion of the limitations of their work in the final section. Beyond this, I foresee no other potential negative societal impacts of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive feedback and thoughtful comments. **About bounded activations** In fact, the boundedness of the activation function is commonly assumed in the literature. Hence, it is not a critical limitation. That being said, we can relax the boundedness for each factor. For instance, the boundedness of $h(\cdot,z)$ can be replaced with Lipschitz continuity for deriving the LSI constant and can be replaced with the bounded second moment of $h(X,z)~(X\sim \mu_*)$ for evaluating Bregman divergence. The most critical point to be careful about is the relaxation for Assumption 1. Indeed, Assumption 1 with an unbounded activation function imposes another limitation: the output layer must be fixed, and the derivative of the loss function $\ell$ must be bounded (e.g., logistic loss). --- Rebuttal Comment 1.1: Comment: Thank you for the additional information - I think my original assessment is unchanged.
Summary: This work proves a novel particle approximation error for the continuous- and discrete-time mean-field Langevin dynamics (MFLD), which removes the dependency on the log-Sobolev inequality (LSI) constant in the number of particles, leading to a potential exponential in inverse-temperature improvement in the number of particles required. This new bound is used to improve prior optimization, sampling, and uniform-in-time propagation of chaos guarantees for the MFLD. Strengths: The Bregman divergence based technique of analyzing the particle discretization error is novel and interesting, and can inspire further research in this line of work. The intuition behind the proof is also nicely explained in the main text. Weaknesses: Currently my main concern is the verification of Assumption 4 in standard settings. The authors cite Lemma 22 of Kook et al., 2024 to obtain a uniform-in-N LSI constant for $\mu_*^{(N)}$. However, in the new version of Kook et al., 2024, Lemma 22 only proves an LSI constant for the proximal Gibbs measure. In the new version, Corollary 25 does prove a uniform-in-N LSI constant for $\mu_*^{(N)}$, but this requires $N$ to be exponentially large in the inverse temperature $1/\lambda$, which is what this paper tried to avoid. I would be happy to increase my score if this issue is resolved. Additionally, minor comments and questions are provided in the “Questions” section below. Technical Quality: 2 Clarity: 3 Questions for Authors: * What is the main challenge in handling unbounded activations? Furthermore, does the boundedness of the first variation in Assumption 1 play an important role, or can it be removed as the bound does not enter any of the estimates? * In Line 162, the authors refer to Equation (11) as the continuous-time propagation of chaos error, while Equation (11) only considers the error at optimality, and the continuous-time propagation of chaos result is introduced in Theorem 2. This might need more clarification. 
* In Assumption 3, $\ell$ is said to be Lipschitz smooth, while in fact its gradient is assumed to be Lipschitz. It might be more clear to refer to $\ell$ as simply $L$-smooth. * The step size appearing on the RHS in Part 1 of Theorem 2 might be a typo, since it is a continuous-time statement. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have adequately discussed the limitations of their work. The main limitation is the fact that the number of iterations and therefore the computational complexity still depend on the LSI constant, which make the algorithm inefficient in low noise regimes in the worst case. Another limitation is assuming bounded activations, which could potentially be relaxed with a more detailed analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful reading and feedback. **Verification of Assumption 4 and Lemma 22 of [Kook et al., 2024]** Thank you for bringing up this point. Indeed, the authors of [Kook et al., 2024] updated their manuscript due to an error in Lemma 22 of the previous version. As a result, a new $N$-independent LSI constant (Corollary 25 of [Kook et al., 2024]) requires a strong assumption on $N$. Therefore, we plan to simply replace it with the LSI constant: $\alpha \geq \frac{\lambda'}{\lambda}\exp(-NB/\lambda)$, supported by the Holley-Stroock argument under boundedness of $F_0$. Although this LSI constant depends on $N$, it is useful in practice because we basically want to use the minimum number of particles $N$ to achieve a given accuracy. In fact, the LSI constant $\frac{\lambda'}{\lambda}\exp(-NB/\lambda)$ is better than the original constant between lines 204-205 of the submission when $N \leq 1/\lambda'$, which aligns with practical machine learning settings. Specifically, for a given number of examples $n$ and $\gamma>0$ (e.g., $\gamma=1/2, 1$), we usually set $\lambda' = 1/n^\gamma$, and our particle approximation bound also suggests an error of $1/n^\gamma$ when $N=n^\gamma$, which indeed satisfies $N \leq 1/\lambda'$. In other words, under this typical machine learning setting, the estimated LSI constant $\alpha \geq \frac{\lambda'}{\lambda}\exp(-NB/\lambda)$ is better than the LSI constant (lines 204-205) based on the previous version of [Kook et al., 2024]. Moreover, we confirmed that our proof technique can be used to improve their LSI result (Corollary 25 of [Kook et al., 2024]) so that it holds for any $N$. Indeed, the assumption on $N$ of Corollary 25 stems from Wasserstein propagation of chaos (Theorem 26), which inherently requires $N \geq 1/\alpha$ (see also Theorem 5). On the other hand, our propagation of chaos result holds for any $N$, and hence, we can remove this condition from Theorem 26. 
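The parameter check behind the claim above that the typical setting satisfies $N \leq 1/\lambda'$, written out:

```latex
\lambda' = n^{-\gamma}, \quad N = n^{\gamma}
\;\Longrightarrow\;
N \lambda' = 1,
\;\text{ i.e. }\;
N = \tfrac{1}{\lambda'},
```

so the condition $N \leq 1/\lambda'$ holds (with equality), and the estimated constant $\frac{\lambda'}{\lambda} e^{-NB/\lambda}$ applies in this regime.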
However, our aim is not to improve the LSI but to derive an LSI-constant free particle approximation error. Therefore, the improvement of the LSI will be studied in future work with further refinement, and we will just utilize the $N$-dependent LSI for our submission. **The main challenge in handling unbounded activations** The boundedness is due to several factors: Lipschitz smoothness of the first variation (Assumption 1), the LSI constant by the Holley-Stroock argument, and the evaluation of the Bregman divergence. That being said, we can relax the boundedness for each factor. For instance, the boundedness of $h(\cdot,z)$ can be replaced with the Lipschitz continuity for deriving the LSI constant and with the bounded second moment of $h(X,z)~(X\sim \mu_*)$ for evaluating the Bregman divergence. The most critical point to be careful about is the relaxation for Assumption 1. Indeed, Assumption 1 with an unbounded activation function introduces another limitation: the output layer must be fixed, and the derivative of the loss function $\ell$ must be bounded (e.g., logistic loss). We also would like to note that the boundedness of the first variation in Assumption 1 simply states the differentiability of $F_0$. This seems redundant, so we will omit it in the revision. **In Line 162, the authors refer to Equation (11) as the continuous-time propagation of chaos error, while Equation (11) only considers the error at optimality** Thank you for pointing it out. Indeed, Eq. (11) only considers the error at the solution. We will simply reword this sentence as follows: "The finite-particle approximation error $O(\frac{\lambda}{\alpha N})$ appearing in (11), (13) means ..." Moreover, we will update the manuscript according to your suggestion and fix the typo. Thank you for pointing them out. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses and clarifications. 
My concern about the LSI constant estimate after Assumption 4 is resolved, therefore I have increased my rating to 6.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
CLIPAway: Harmonizing focused embeddings for removing objects via diffusion models
Accept (poster)
Summary: This work addresses the object removal problem with a simple and effective embedding arithmetic strategy. This idea is implemented by combining the object semantic understanding ability from Alpha-CLIP and the generative ability from text-to-image diffusion models. One good property is that the proposed method is a plug-and-play strategy that can be adopted in various diffusion-based t2i frameworks. Compared to the previous GAN-based and diffusion-based inpainting methods, this method demonstrates better results with suitable backgrounds and fewer artifacts. Strengths: The simple and effective embedding arithmetic strategy is intuitive and suitable for the object removal problem. The designed technical framework of Alpha-CLIP plus a t2i diffusion model is flexible and powerful since many powerful conditional image generative models are based on CLIP embeddings. It would be interesting to see how this technique can be applied to other t2i frameworks. Figure 5 demonstrates the superiority of the proposed method, which achieves a good trade-off between FID and CLIP Scores. Figure 6 also shows good inpainting results with fewer artifacts and blurring effects. The paper is easy to follow and the algorithm has been illustrated step-by-step. I highly recommend the authors release the code and model for follow-up works. I believe it can benefit the related CV fields. Weaknesses: It seems that the applicable image resolution is limited by the diffusion method. Comparisons on higher resolutions could demonstrate the boundary of this technique since one benefit of LaMa-like methods is their applicability to high-resolution images. One popular baseline of “LaMa + SDinpaint” should be compared; please check https://github.com/Mikubill/sd-webui-controlnet/discussions/1597 More failure cases could be presented and analyzed to show the limitations. 
I suppose this method can be extended to other frameworks like SDXL; it would be great to perform these extensions to make this work more comprehensive. Technical Quality: 3 Clarity: 3 Questions for Authors: A comparison with “LaMa + SDinpaint” is required. Comparisons on higher resolutions could be demonstrated. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: More failure cases could be discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed feedback, which helps us improve the paper further. **Applicability to Higher Resolution Images**\ We appreciate the reviewer's insight regarding the applicability of our method to higher resolution images. As noted, the current limitation is inherent to existing diffusion-based image inpainting methods. Our primary goal was to enhance object-removal performance within these existing constraints. Future research could indeed focus on adapting our method for higher resolutions, potentially bridging the gap between diffusion-based techniques and LaMa-like methods. This would involve significant advancements in diffusion model architectures to handle high-resolution images effectively. **Comparison with “LaMa + SD-Inpaint” Baseline**\ Thank you for suggesting the comparison with the “LaMa + SD-Inpaint” baseline. This is a very interesting work. We conducted this comparison using the same setup as in Table 1 of our manuscript. The results are as follows. | Method | FID | CMMD | CLIP Dist | CLIP@1 | CLIP@3 | CLIP@5 | KID | |-------------------|------|------|-----------|--------|--------|--------|--------| | SD-Inpaint | 59.21| 0.54 | 0.75 | 70.45 | 57.14 | 49.88 | 0.0145 | | LaMa | 65.76| 0.81 | 0.66 | 78.34 | 64.42 | 56.85 | 0.0195 | | LaMa + SDinpaint | 51.33| 0.45 | 0.75 | 72.29 | 57.61 | 50.01 | 0.0117 | | SD-Inpaint + CLIPAway | 54.93| 0.48 | 0.80 | 82.36 | 71.68 | 63.28 | 0.0095 | Incorporating SD-Inpaint as a post-process significantly improved sample quality, as indicated by the improvement in the FID score. However, the object removal metrics reveal a decline in the model’s ability to effectively remove objects. We attribute this decline to SD-Inpaint’s tendency to insert and hallucinate objects, as noted in our paper. 
We would also like to mention that on KID, the other visual quality metric, the CLIPAway model achieves better scores, in addition to the significantly better scores on the object removal CLIP-based metrics. We appreciate the reviewer’s suggestion, which helped us highlight the trade-offs involved in using SD-Inpaint in combination with LaMa. We will include these results in our manuscript to provide a more comprehensive evaluation of our pipeline. We also add visual results of this comparison in the attached Rebuttal Fig. 6. It can be seen that LaMa fills the erased area in a blurred way, and SD-Inpaint applied after LaMa adds objects again. For example, in the first row, a horse is added to the image. **Analysis of Failure Cases**\ We appreciate the reviewer's suggestion to present and analyze more failure cases to highlight the limitations of our method. One significant limitation of our method is the constraint on applicable image resolution as noted by the Reviewer, which is inherent to diffusion-based inpainting methods. Specifically, the model in a Stable Diffusion (SD) pipeline expects input images and masks at 512x512 pixel resolution, while SD-XL pipelines expect 1024x1024 resolution. Deviating from these resolutions degrades the performance of the UNet in the diffusion pipeline, resulting in less effective inpainting, regardless of the quality of the embeddings provided by the CLIPAway pipeline. We will include a detailed discussion of these limitations in our manuscript to provide a comprehensive evaluation of our method. **Potential Extensions to Other Frameworks**\ We appreciate the reviewer's suggestion to apply our method to SDXL. After examining the SDXL-Inpaint model, we observed that the issue of object addition is even more pronounced. 
SDXL recommends setting the strength parameter to 0.99, using the unerased image as input, and introducing noise at this strength (for reference: https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1). The strength parameter determines how much the masked portion of the reference image is transformed, ranging from 0 to 1. A higher strength adds more noise to the image, requiring more denoising steps. When the strength is set to 1, maximum noise is added, leading to degradation in image quality as noted on the webpage's limitations, with ongoing work to address this issue. Fig. 7 of the Rebuttal PDF illustrates that using the original input image with a strength of 1.0 results in lower output quality. Conversely, a strength of 0.99 often causes the model to regenerate the erased foreground object, as it retains some information about it. This setting yields a CLIP-1 score of 67.22, the lowest among those reported in Table 1 of the main paper. When combined with our CLIPAway method, the SDXL-Inpaint model achieves a CLIP-1 score of 85.91. We will include comprehensive experiments with SDXL in the revised manuscript. We believe that eliminating the reliance on unerased images will improve results for both SDXL-Inpaint and our approach. Lastly, we thank the reviewer for recommending that we open-source our code and for finding our work beneficial. We are pleased to announce that the code and model are ready and will be released soon. --- Rebuttal 2: Title: Thanks for the rebuttal Comment: I have carefully reviewed the authors' rebuttal and the other reviewers' comments. The authors have adequately addressed my initial concerns. I would recommend adding the additional baseline comparisons and SDXL experiments in the final version. --- Rebuttal 3: Title: Thank you. Comment: We would like to thank the reviewer for their thorough review and constructive feedback. 
We will certainly incorporate the additional baseline comparisons and SDXL experiments into the final version of the paper as recommended.
Summary: The paper identifies a common limitation in recent diffusion model-based inpainting methods: unintended hallucinations of the removed object. To address the problem, the paper introduces CLIPAway, a plug-and-play module that does not rely on any specific training. Using vector arithmetic, it successfully obtains an embedding that predominantly focuses on the background. Through comparisons, it is demonstrated that the proposed method outperforms existing methods in object removal. Strengths: - The paper identifies a well-known limitation in the current diffusion models and the motivation behind this paper is very strong. - The proposed method does not rely on synthetic data or any special annotations during its training, thus having a stronger ability to generalize and more robust performance compared to prior methods (Fig 8). - The idea of using the AlphaCLIP encoder and doing vector subtraction is simple but effective. - Extensive results (Tab 1 and user study) are presented to support the conclusions of the paper, especially on whether the object is correctly removed. Weaknesses: - Although the proposed model can effectively remove the object, the quality of the inpainted image is not stable. There are notable artifacts in Fig 1, such as the shadow in the 1st, 2nd rows on the right, and 3rd, 4th rows on the left. - The conclusions / observations in Fig 4 lack sufficient evidence. E.g., in the bottom-left, it's hard to tell whether the foreground or the background is more prominent. Adding more visual results / quantitative comparisons may help to obtain a more solid/reasonable observation. - There have been a lot of evaluations on whether the object is correctly removed (in Tab 1), but there should be more assessing the visual quality, e.g., LPIPS. Technical Quality: 3 Clarity: 2 Questions for Authors: - In lines 192-193, when setting up the baselines, the authors used an empty prompt or "background" as input. 
In practice, providing an accurate prompt describing the background can also be useful in removing the object. E.g., when removing a laptop from a table, the prompt can be "an empty table". Does this work for the DM-based baselines? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Already addressed. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed feedback. **Eliminating Shadows**\ We appreciate the reviewer’s concern about shadows in the inpainted images. Since our method uses SD-Inpaint as its backbone, inpainting is performed only on the masked regions, which ensures that only the areas the user wants to remove are altered, while the unmasked areas are preserved. LaMa, MAT, and ZITS++ follow a similar approach. To address the issue of shadows, we conducted additional experiments where both the objects and their shadows were included in the masks. As shown in the Rebuttal PDF (Fig. 3), this adjustment significantly improves the results by effectively removing both the objects and their shadows, leading to cleaner and more accurate inpainted images. This way, only the areas the user wants to remove are altered; the user can choose to keep the shadow for some artistic designs or choose to remove it. **Additional Visual Evidence**\ We thank the reviewer for their suggestion to provide more visual evidence. We agree that additional examples would better demonstrate the efficacy of our method. To address this, we have included more visual results in the Rebuttal PDF (Fig. 2) that show the behavior of the foreground, background, and projected embeddings. Specifically, we observe that the foreground embedding is dominated by the foreground object, while the foreground object appears at a smaller scale in the conditional generation results of the background embedding. These new examples highlight the effectiveness of our projection block in removing objects of interest. We will update our submission to include these new results, strengthening our observations and making them more convincing. 
**Improving Visual Quality Assessment**\ To provide a more comprehensive assessment of visual quality, we have included Kernel Inception Distance (KID) as an additional photorealism metric as given below (lower is better): | Method | KID | |----------------------------|------| | ZITS | 0.0208 | | MAT | 0.0278 | | LaMa | 0.0195 | | Blended | 0.0362 | | Blended + CLIPAway | 0.0194 | | Unipaint | 0.0360 | | Unipaint + CLIPAway | 0.0199 | | SDInpaint | 0.0145 | | SDInpaint + CLIPAway | 0.0108 | As shown in the table, KID results align with the photorealism metrics reported previously. We did not use LPIPS because it necessitates reference images with the objects removed, which are not available. With the addition of KID, we now have three metrics to evaluate visual quality, complementing FID and CMMD, which were previously reported in the quantitative comparisons in the main paper. **Prompt Use in Baseline Comparisons**\ We understand that using a specific prompt, such as "an empty table" when removing a laptop, can sometimes improve results. However, we found that this approach does not consistently improve the baselines. Sample results demonstrating this are included in the Rebuttal PDF (Fig. 4). For example, in the first row the "empty table top" prompt results in an additional component being added on top of the table rather than the table being completed directly; "an empty runway" still adds planes to the image; and "an empty bed" adds a pillow to the bed. Additionally, generating specific prompts for every situation in a large dataset is impractical and time-consuming. Our method circumvents this issue by using vector arithmetic in the CLIP space, eliminating the need for custom prompts. We believe this makes our method more robust and user-friendly. We hope the reviewer finds this perspective agreeable. 
--- Rebuttal 2: Comment: Thank you for providing extensive additional results in the rebuttal; they do improve my understanding of the proposed method and fully resolve my concerns. I raised my score, and hope the additional results can be included in the paper. --- Rebuttal Comment 2.1: Comment: We would like to thank the reviewer for their constructive review and for raising their score based on the additional results provided in the rebuttal. We're glad to hear that the new information has addressed the reviewer's concerns. We will certainly include these additional results in the final version of the paper.
Summary: This paper proposes an approach that aims to tackle object removal in stable diffusion models. The paper utilizes AlphaCLIP embeddings (which are trained with an additional alpha-channel mask to enable incorporation of regions of interest) and techniques such as IP-Adapter, which decouples the diffusion UNet cross-attention mechanisms to accept features coming from image and text separately. The authors train using image prompts as inputs, controlling the alpha channel to correspond to background or foreground (inverse of background) focus to guide the attention mechanism. Eq. 2 aims to calculate embeddings that are orthogonal to the foreground embedding, resulting in embeddings that focus on the background. Strengths: - The method leverages existing work well, and the proposed method appears to indeed remove objects in a smoother way than competitors - It is sometimes difficult to judge from a few examples qualitatively, but it seems that the proposed method performs relatively better in most cases presented - Ablation studies show the impact of background/foreground embeddings - although in most cases it seems that the background and foreground embeddings contain both. The orthogonal embeddings do appear to include only information related to the background - although that does not exactly match the background of the image Weaknesses: - As with the shadows example mentioned in the limitations, the method does not seem to remove objects based on much context - i.e., the region of interest is a stronger control. This may lead to undesirable or unrealistic results - so perhaps a trade-off between strict ROI adherence and context could be employed.
- The method can be plug and play after training - but the training process itself can make it specific to architectural choices - Although mentioned in the paper, this work appears to have a larger computational overhead than simpler inpainting methods, which perhaps can then be improved with simpler post-processing - The paper is focused on object removal in a background-consistent manner, which is a limited task, while some compared methods tackle the more general problem of image inpainting Technical Quality: 3 Clarity: 3 Questions for Authors: please see above Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed feedback. We appreciate the reviewer's recognition of our method's ability to leverage existing work effectively and its superior performance in removing objects smoothly compared to competitors. To address the concern regarding the qualitative judgment based on a few examples, we will extend our supplementary material with additional qualitative examples to better demonstrate the effectiveness of our method. Additionally, we would like to emphasize that the quantitative analysis provided in Table 1 shows we achieve consistent improvements over many baselines. We appreciate the reviewer's comments on our ablation studies showing the impact of background and foreground embeddings. While background and foreground embeddings may sometimes contain overlapping information, our method aims to orthogonalize these embeddings to effectively isolate relevant background information. The orthogonal embeddings are designed to include only information related to the background. Although these embeddings are not expected to exactly match the original background of the image, they are constructed to exclude the foreground object and provide a contextually coherent background representation. To further clarify and validate the effectiveness of our approach, we provide additional examples in the Rebuttal PDF (Fig. 2), which show the behavior of the foreground, background, and projected embeddings. We will add more examples in the revised paper. We would like to thank the reviewer for bringing this to our attention. **Regarding Contextual Understanding**\ Since we use SDInpaint as our backbone, inpainting is carried out only on the regions identified by the provided masks, as is also the case in competing methods such as LaMa, MAT, and ZITS++.
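The orthogonalization described in this rebuttal is, at its core, a standard vector projection in the embedding space. A minimal sketch (the 768-dimensional random vectors are illustrative stand-ins for the actual AlphaCLIP-derived foreground and background embeddings):

```python
import numpy as np

def project_out(background, foreground):
    """Remove the component of the background embedding that lies along
    the foreground embedding, leaving a vector orthogonal to the
    foreground direction."""
    fg_unit = foreground / np.linalg.norm(foreground)
    return background - (background @ fg_unit) * fg_unit

rng = np.random.default_rng(0)
e_fg = rng.normal(size=768)  # stand-in for a foreground-focused embedding
e_bg = rng.normal(size=768)  # stand-in for a background-focused embedding

e_proj = project_out(e_bg, e_fg)
print(np.isclose(e_proj @ e_fg, 0.0))  # orthogonal to the foreground direction
```

The projected vector then conditions the diffusion model, so that, by construction, no component along the foreground direction is passed on.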
To address the reviewer's concern about the method not removing shadows, we conducted new experiments where both the objects of interest and their shadows were included in the corresponding masks. We share these results in the Rebuttal PDF (Fig. 3). Our method produces highly satisfactory results, effectively eliminating the objects and their shadows when the user prefers to. **Clarification on "Plug and Play" Capability**\ We apologize if the use of the term "plug and play" caused any confusion. Our method bridges the AlphaCLIP embedding space to the CLIP embedding space through projections, which requires some architectural decisions during the training process. However, once trained, our CLIPAway module can be integrated into any framework that employs the CLIP embedding space, or other CLIP embedding spaces with similar properties, without requiring additional training. As demonstrated in Table 1, adding CLIPAway to the SD-Inpaint, Blended Diffusion, and Unipaint frameworks significantly improves their performance on the object removal task. **Computational Overhead and Efficiency**\ While diffusion models do have higher computational overhead than GAN models, they are currently favored due to the significant improvements they offer. Additionally, there is extensive research focused on optimizing diffusion models and reducing the number of required denoising steps to improve their efficiency. Therefore, we believe it is important to continue improving these models for various tasks. We are not aware of any straightforward post-processing technique that can elevate simpler inpainting methods to a state-of-the-art level of quality. Reviewer GcHN brought the LaMa + SD-Inpaint baseline to our attention, where the faster LaMa model generates the initial inpainting results and SDInpaint, a diffusion-based inpainting method, serves as post-processing. However, this combination is more computationally intensive than our approach and yields worse results.
**Expanding Beyond Object Removal**\ While our main focus in the paper is on object removal, our framework can be applied to other tasks. Based on the reviewer's comment, we investigated the capabilities of our framework on reference-based background inpainting, as given in the Rebuttal PDF (Fig. 5). By computing the projected embedding of a reference image representing the target background, we inpaint the background of the input image while preserving its foreground. We will revise our submission to include these new results, demonstrating that our method can open directions for image manipulation and general image inpainting problems. Additionally, we would like to emphasize that object removal is a significant task in its own right, and previous research has proposed generating datasets specifically for this purpose. --- Rebuttal 2: Title: thanks for the detailed response Comment: many thanks to the authors for the detailed response. In my view, this is still a limited task that can be dealt with in different ways with varying success. However, I am happy with the answers that the authors have provided and have raised my score accordingly --- Rebuttal Comment 2.1: Comment: We would like to thank the reviewer for their feedback and for raising their score. We appreciate the reviewer's acknowledgment of our detailed response and understand their perspective on the task's limitations.
Summary: This paper addresses a commonly observed problem when using pre-trained diffusion models for object removal: in standard inpainting setups, these models often add similar objects in place of the ones to be removed instead of extending the background to the masked area. To address this problem, the authors use CLIPAway, a method that leverages region-focused embeddings from Alpha-CLIP, obtaining further disentangled background embeddings that can then be used to guide the diffusion model to perform high-quality object removal without foreground leakage. Strengths: - This paper addresses a very common and important problem in practical applications of large-scale pretrained T2I diffusion models. - While very simple, the proposed method combines existing methods (Alpha-CLIP and T2I-Adapter/T2I diffusion models conditioned on CLIP embeddings) in a novel and very effective manner. The proposed method is demonstrated to work well qualitatively and outperform other approaches when combined with off-the-shelf inpainting methods in a quantitative evaluation. - The paper is written clearly and each part of the method is well-motivated and explained both in text and with qualitative examples. Weaknesses: The main weakness with the paper is the simplicity of the method.
Generally, a simple method that achieves the goal is often better than an over-involved complex method, but most parts of this method are relatively obvious: translating from one CLIP embedding space to another seems to primarily be an engineering decision, as the pre-trained IP-Adapter does not directly consume Alpha-CLIP embeddings, but it would likely be very simple to just train an adapter on these embeddings; and orthogonalization is a very basic operation, even if it is curious that it works so well in this case. Moreover, aspects of the transferability of this method, which are relevant for its further impact beyond the implementation presented in this paper (e.g., whether it is reliant on the exact two CLIP embedding spaces used, or whether they can be substituted with other CLIP spaces with similar properties), are not covered. Technical Quality: 3 Clarity: 4 Questions for Authors: - Can the projection also be performed in the AlphaCLIP embedding space (or other CLIP embedding spaces), or does it only work in the adapted OpenCLIP embedding space? This would be especially interesting as it would speak to the generalizability of the proposed method, as being limited to specific CLIP embedding spaces might limit practical applications, such as applying this to a standard unCLIP model (e.g., Karlo https://github.com/kakaobrain/karlo, Stable unCLIP https://huggingface.co/docs/diffusers/en/api/pipelines/stable_unclip). - What is the time (wall clock/GPU-hours) required for training the MLP? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed the limitations and societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed review. We appreciate the reviewer's feedback regarding the simplicity of our method. We share the same preference for simplicity over complex solutions. While our solution might seem obvious now, previous research attempted to address object removal tasks by generating synthetic datasets. For instance, InstInpaint [30], MagicBrush [33], InstructPix2Pix [2], and concurrently ObjectDrop [26] explored different approaches to create training datasets for this purpose. Our method does not require dataset generation and achieves better results than competing methods, which makes it more elegant. The reviewer asks about the flexibility of CLIP embedding spaces. This is an excellent point. Our method is not confined to the CLIP embedding spaces used in our initial experiments. To demonstrate this flexibility, we trained the MLP for AlphaCLIP with the model keys ViT-L/14@336px and ViT-B/16, in addition to the ViT-L/14 that was presented in the paper. After training, we evaluated each model using the same setup as in Table 1 of the manuscript. The quantitative scores for each model are provided in the table below:

| Method | FID | CMMD | CLIP Dist | CLIP@1 | CLIP@3 | CLIP@5 |
|-------------------------------------------|------|------|-----------|--------|--------|--------|
| SD-Inpaint | 59.21 | 0.54 | 0.75 | 70.45 | 57.14 | 49.88 |
| SD-Inpaint + CLIPAway (ViT-L/14, presented in the paper) | 57.32 | 0.53 | 0.81 | 84.82 | 74.42 | 67.76 |
| SD-Inpaint + CLIPAway (ViT-L/14@336px) | 54.93 | 0.48 | 0.80 | 82.36 | 71.68 | 63.28 |
| SD-Inpaint + CLIPAway (ViT-B/16) | 55.31 | 0.48 | 0.78 | 83.57 | 72.44 | 63.81 |

The results consistently improve on the SD-Inpaint model with different CLIP embeddings. The reviewer asks if the projection could be done in the AlphaCLIP space rather than the OpenCLIP embedding space. Our projection method is not restricted to the OpenCLIP embedding space.
Since AlphaCLIP’s vision transformer is trained with similar objectives as the CLIP vision transformer, their feature spaces are conceptually similar. Therefore, projections can be performed in the AlphaCLIP embedding space or other CLIP embedding spaces with similar properties. To support this, we evaluated the projection method on the AlphaCLIP feature space (projection in the AlphaCLIP space followed by the MLP) using the same setup as described in Table 1 of the manuscript. The results, shown below, confirm that our projection approach is applicable beyond the OpenCLIP embedding space.

| Model | FID | CMMD | CLIP Dist | CLIP@1 | CLIP@3 | CLIP@5 |
|------------------|------|------|-----------|--------|--------|--------|
| SD-Inpaint | 59.21 | 0.54 | 0.75 | 70.45 | 57.14 | 49.88 |
| SDInpaint + CLIPAway ViT-L/14@336px | 54.46 | 0.36 | 0.82 | 82.13 | 70.02 | 63.28 |
| SDInpaint + CLIPAway ViT-L/14 | 56.15 | 0.42 | 0.86 | 85.31 | 74.26 | 68.58 |
| SDInpaint + CLIPAway ViT-B/16 | 54.99 | 0.41 | 0.84 | 85.08 | 74.79 | 68.35 |

We will include these results in the revised paper. We believe they will make the paper more comprehensive. We would like to thank the reviewer for these valuable suggestions. We also provide qualitative object removal examples in the Rebuttal PDF (Fig. 1), further confirming the effectiveness of our projection approach across different CLIP embedding spaces. These results illustrate our method’s flexibility and robustness, supporting its broader applicability. Training the MLP layer for AlphaCLIP with the model key ViT-B/16 on a single Nvidia A40 GPU takes approximately 7 hours, which is negligible GPU time compared to training diffusion models from scratch. --- Rebuttal Comment 1.1: Comment: Thank you for the extensive response and for running the extensive additional experiments in the short rebuttal timespan! I agree that the method presented is very elegant due to its simplicity.
My main concern was that it was not clear from the initial submission whether this simplicity would limit it to the specific model combination presented in the paper. Given the extensive additional results demonstrating that the method works across different AlphaCLIP versions, the projection can be performed in the AlphaCLIP space as well, and the additional results on SDXL demonstrating that the performance is not limited to a specific diffusion model, this concern has been thoroughly addressed. After carefully reviewing the authors' rebuttal and responses to mine and the other reviews, I will raise my score from 6 to 7 and hope for acceptance. I'd appreciate it if the authors incorporated these additional results into the paper in a suitable manner. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for their feedback which helped us to improve the paper. We also would like to thank the reviewer for taking the time to review the additional results and responses, and we appreciate the reviewer raising their score. We will certainly incorporate the additional results into the paper.
Rebuttal 1: Rebuttal: We want to thank all reviewers for their valuable feedback. We have responded to each reviewer's questions in the rebuttal sections, and attached a PDF file with figures for our additional results. Pdf: /pdf/68d4b94941cfc35765580f003f2c0f9819569eff.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Diffusion Policies Creating a Trust Region for Offline Reinforcement Learning
Accept (poster)
Summary: The paper proposes a new diffusion-based training loss for offline reinforcement learning, in which the empirical actions from the offline datasets are replaced with the actions sampled from a parameterized policy. The paper calls this new loss the “trust region loss” and uses it as one of the objectives to minimize in the diffusion policy learning. The paper points out the mode-seeking behavior induced by the minimization of this “trust region loss” and compares it with forward-KL-regularized behavior cloning. The paper also evaluates the methods on D4RL datasets with some offline RL methods. Strengths: ### One-step policy Introducing a one-step policy seems a promising way to address the challenges in diffusion sampling. This idea is nicely motivated and well-justified. The discussion on the mode-seeking behaviors is interesting, especially with the empirical results as extra evidence to support it. Weaknesses: I found the paper shares great similarities with the SRPO paper, in terms of the idea, the formulation, and the presentation. The empirical results of the proposed method are also close to the SRPO method. Beyond that, I have several other concerns: ### **Unknown trust region** 1. A bit surprisingly, **the paper gives no explicit definition of the crucial trust region used in Eq. (4)**, the diffusion-based trust region loss, even though the trust region is the key concept of the whole paper. Particularly, the loss function defined by Eq. (4) comes directly from a simple modification of Eq. (3), replacing the empirical action $a_0$ with the action generated by the policy $\pi_\theta$. It is unclear what trust region this simple modification induces. 2. 
Regarding “the loss effectively creates a trust region defined by the diffusion-based behavior cloning policy, within which the one-step policy $\pi_\theta$ can move freely”, the paper does not explain how “to generate actions (data) that reside in the high-density region of the data manifold specified by $\mu_\theta$ through optimizing $\theta$”. Note that $a_\theta$ generated by $\pi_\theta$ will be fed into $\mu_\phi$. **So a trivial solution of this optimization will be $\pi_\theta(\cdot|s) = a_0$, i.e., $\pi_\theta$ just returns $a_0$ for every $(a_0, s)\sim D$ as in Eq. (2). Then the loss in Eq. (4) is exactly the same as Eq. (3)**. So how does the optimization of Eq. (4) avoid such trivial solutions? Especially when the paper says Eq. (4) is to encourage mode-seeking behaviors. 3. **The “one-step policy”, another important factor in the trust region, is introduced a bit inconsistently in Sections 2.3 and 2.4**. I understand that Eq. (4) is to introduce a policy that seeks to model only one mode, i.e., one action only, from the training samples. But then lines 133-137 say “the generated action $a_\theta$ appear similar to in-sample actions and penalizes those that differ”, which means the action should be similar to any of the training samples to avoid penalties. Also, this “mode-seeking” contradicts the statement “explore freely” in line 149: how does this policy move freely? And how is the penalization triggered? In addition, the paper mentioned an implicit parameterization of $\pi_\theta$: $a_\theta=\pi_\theta(s, \epsilon)$. For this implicit parameterization, what is the difference between optimizing Eq. (4) and simple behavior cloning? An even more inconsistent description is given in line 183, “one-step Implicit policy”: why is $\pi_\theta$ here instantiated implicitly?
### A bit trivial discussion on mode-seeking behaviors **I found the discussion on mode-seeking behaviors in Section 3 a bit trivial and pretty incremental, as these points have been made by SRPO already in its Section 2.2**. 1. **The notations are not quite clear**: $p_{fake}$ and $p_{real}$ are not defined explicitly with respect to $\pi_\theta$ and $\mu_\phi$. 2. The forward KL divergence is mode-covering in the sense that it learns a distribution that tries to cover all modes, while the reverse KL divergence is mode-seeking, behaving similarly to the optimization of $L_{TR}$. This discussion has been provided by the SRPO paper in its equations (2) and (3). So what’s new? More importantly, **since the paper focuses more on the mode-seeking behavior, why does it not compare against reverse KL divergence minimization?** 3. **Theorem 1 tells very little about the mode-seeking behavior of $\pi_\theta$ except the simple substitution of mode actions**. I don’t see its importance in the training, especially for the mode-seeking behaviors. ### Hard to understand **There are quite a few sentences and descriptions with language issues or typos, which make the whole paper rather hard to understand**. Examples of language issues and typos: * Line 192-193: “KL divergence is employed in this context, it is designed to recover all modalities of the data distribution” * Line 199: “the loss ensures that the generated action $a_\theta$ lies within the in-sample datasets” * Line 288: “a Jacobian term” is different from what appears in Eq. (11). * Line 126, “minimize $L_{PB}$ in Equation 4”. There is no such $L_{PB}$ term. * Line 183, “is instantiates as an” **Importantly, the hyperparameters used in the experiments are a bit hard to understand**. Particularly, in the experiments, the paper mentioned that “the variation between datasets significantly impacts the algorithm performance”.
Then why did the paper still “employ the official SRPO code on Antmaze-v0 and maintain identical hyperparameters used for Antmaze-v2”? Why wouldn’t the paper just report the performance on both versions, separating DQL and IDQL from SRPO? Technical Quality: 1 Clarity: 1 Questions for Authors: 1. Can the authors explain which empirical results support the assertion that “diversity, while valued in image and 3D generation, is not essential in offline RL”, as speculated in lines 234-235? 2. Can the authors explain “discourage out-of-distribution sampling” in line 107? 3. What does it mean that “policy $\pi_\theta$ can move freely” in line 121? Confidence: 4 Soundness: 1 Presentation: 1 Contribution: 2 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We believe there may have been some misunderstandings regarding our paper. We will strive to address your questions thoroughly and hope you will consider re-evaluating our work. ### Similarities Firstly, we have discussed some differences between our method and SRPO in the **Global Response**. Here we address the reviewer's concerns about the similarity in "presentation" and "empirical results". **The presentation.** We believe every excellent work, like DQL, SRPO, and IDQL, shares a similar logic in presenting the story in the offline RL setting. The presentation of our work humbly learns from these papers. Thus our presentation flow is not only similar to SRPO but also to DQL and IDQL, as these papers serve as excellent examples for us. **Empirical results.** Our method slightly outperforms SRPO on Gym tasks, achieving different SOTA results on various sub-tasks. For instance, SRPO scores higher on walker2d-m-e (114 vs. 110) but lower on hopper-m-e (100 vs. 109). Additionally, our method surpasses SRPO on antmaze-v0 tasks and excels on some antmaze-v2 sub-tasks like antmaze-l-p and antmaze-m-d (**Table 6**). ### Unknown trust region **1. Definition.** Formally, by Theorem 1, given any state $s$, we can define a trust region for actions by the set $\{a \mid \log p(a|s) \ge \text{threshold}\}$, where $\log p(a|s)$ is approximated by the diffusion loss (Eq. 4). The magnitude of the threshold can be tuned by the hyperparameter $\alpha$ during the optimization of the final loss (Eq. 5). Thank you for pointing this out; we will make this point clearer in the revision. **2. Trivial solution.** Regarding the reviewer's concerns about "move freely" and "how to generate actions...", we have tried to answer them in the **Global Response**. As for the "trivial solution", such a solution is only possible in discrete-space control problems where the offline dataset covers the whole state space. 
However, for continuous control problems, such a trivial solution has probability measure zero, since both the action and state spaces are continuous and the size of the offline dataset is finite. Thus, this trivial solution does not apply to our setting. Moreover, even in the discrete case, the trivial solution you mentioned is essentially behavior cloning. When adding the Q function for guidance to improve the policy, $\pi$ will only return the action $a$ with the highest Q value for any given state $s$. **3. Inconsistency.** We addressed the reviewer's concerns about the "inconsistent" presentation in our global response section. Here, we aim to clarify "why $\pi_\theta$ is instantiated implicitly" in our approach. The reason for the implicit instantiation of $\pi_\theta$ in line 183 is the KL loss (Eq. 8), which encourages mode seeking (Figure 2). The Gaussian policy has limited expressiveness; therefore, we make $\pi_\theta$ implicit to enhance its expressiveness and its ability to seek more modes. It is important to note that Eq. 8 is never used in our method; it is included only as a comparison baseline. ### Mode-seeking behaviors **1. Trivial.** The discussion in Section 3 is primarily to demonstrate that our method indeed exhibits mode-seeking behaviors. While SRPO discusses mode seeking for its own method, our focus here is to discuss the mode-seeking behavior specific to our approach. Therefore, we believe this discussion is necessary and not trivial. **2. Notations.** Constrained by the length of the paper, we do not present all details of the algorithm in Section 3, since it is not our main contribution. However, we did mention in line 191 that more details are deferred to Appendix D. For full details, please refer to Variational Score Distillation [33], Diff-Instruct [23], and Distribution Matching Distillation [37]. These reference indices are consistent with those used in our paper. **3. 
Reverse KL divergence minimization.** This was addressed by SRPO, and since our paper thoroughly compares with SRPO, we believe we have sufficiently addressed the comparison with reverse KL divergence. **4. Theorem 1.** Please refer to our global response. ### Hard to understand **1. Typos.** Thank you for the careful review and valuable suggestions. We will incorporate these corrections in the revised manuscript. **2. Experiments.** We have contacted the authors of SRPO by email, and they acknowledged that there is a version mismatch between SRPO and other baselines on Antmaze v0 and v2. Since SRPO does not provide hyperparameters for Antmaze v0, we used the same hyperparameters as for v2 and reported them in the main table. Additionally, as illustrated in line 262 and in **Table 6**, we also compare our method with SRPO on **Antmaze v2**, where we use the same hyperparameters as those for Antmaze v0 for our method. The SRPO results are taken from the original paper for a fair comparison. Separating the tables is a good idea: in the revision, we can move SRPO out of the main table into an additional comparison table containing only Antmaze-v2 results, if the reviewer believes this is a better presentation. ### Responses to Questions For Questions 2 and 3, we believe they have been covered in the global response. For Question 1, in offline RL, typically after training and during interaction with the environment, for any given state we want the agent to generate the action with the highest Q value. Therefore, diversity, in the sense of generating a batch of diverse actions for a given state, is not essential. As long as the action has a high Q value, the diversity of the generated actions does not contribute to increasing the final cumulative reward. A good example is the deterministic policy, which, for a given state, generates only one action. Although it lacks diversity, it can still achieve a high Q value. 
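The forward-vs-reverse KL asymmetry discussed in this thread can be checked numerically. A toy sketch (our own example, not code from either paper): against a bimodal target, a broad Gaussian covering both modes is preferred under forward KL, while a narrow Gaussian sitting on one mode is preferred under reverse KL:

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
eps = 1e-300  # guard against log(0)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def kl(a, b):
    """Discretized KL(a || b) on the grid."""
    return np.sum(a * np.log((a + eps) / (b + eps))) * dx

p = 0.5 * gauss(x, -2.0, 0.3) + 0.5 * gauss(x, 2.0, 0.3)  # bimodal target
q_cover = gauss(x, 0.0, 2.0)  # broad, covers both modes
q_seek = gauss(x, 2.0, 0.3)   # narrow, locked onto one mode

# Forward KL(p || q) penalizes q for missing mass where p has it:
print(kl(p, q_cover) < kl(p, q_seek))   # mode-covering wins
# Reverse KL(q || p) penalizes q for putting mass where p has none:
print(kl(q_seek, p) < kl(q_cover, p))   # mode-seeking wins
```

This is exactly why a loss with reverse-KL-like (mode-seeking) behavior can safely concentrate on one in-distribution mode instead of interpolating between modes.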
--- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for providing the rebuttal response. I still have concerns and some of my questions are also not answered directly. I do think the paper in its current version needs some substantial improvement. I would thus keep my original assessment. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We have made every effort to address your questions and believe our rebuttal has thoroughly covered all of your concerns. We have elaborated on the differences in motivation, provided detailed explanations of the theorem proof, clarified the theorem explanation, and presented comprehensive empirical results. Each of your questions has been answered point by point. We believe we have effectively addressed your concerns, as we did with Reviewer R2od and Reviewer Rh1M, both of whom were satisfied with our responses and voted for acceptance. However, since you still have some concerns and mentioned that some questions were not directly addressed, could you please clarify which points remain unanswered or which aspects are still of concern? We would be happy to engage in further discussion on these matters.
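The trust region defined in the rebuttal above, $\{a \mid \log p(a|s) \ge \text{threshold}\}$, can be pictured on a toy 1-D action space. A minimal sketch (our own illustration: in the actual method, $\log p(a|s)$ is approximated by the diffusion loss of Eq. 4 rather than evaluated in closed form, and the bimodal behavior density below is purely hypothetical):

```python
import numpy as np

def behavior_log_density(a):
    """Toy bimodal behavior distribution over a 1-D action space:
    an equal mixture of Gaussians centered at a = -1 and a = +1."""
    def gauss(a, mu, sigma=0.2):
        return np.exp(-0.5 * ((a - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.log(0.5 * gauss(a, -1.0) + 0.5 * gauss(a, 1.0))

# In the method, the effective threshold is controlled by the weight alpha
# on the trust-region loss; here it is just a fixed illustrative constant.
threshold = np.log(0.1)

def in_trust_region(a):
    return behavior_log_density(a) >= threshold

print(in_trust_region(1.0))   # near a data mode: inside
print(in_trust_region(-1.0))  # the other mode: also inside
print(in_trust_region(0.0))   # between modes: outside
print(in_trust_region(3.0))   # far out-of-distribution: outside
```

The one-step policy can place its action anywhere inside this region without penalty, which is the sense in which it "moves freely" while still avoiding out-of-distribution actions.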
Summary: This paper introduces Diffusion Trusted Q-Learning (DTQL) for offline reinforcement learning. DTQL employs a dual-policy representation: a diffusion policy trained by behaviour cloning, and a one-step Gaussian policy trained by RL while distilling the diffusion model. Specifically, DTQL introduces a diffusion trust region loss for the distillation. Such a scheme allows one-step generation and achieves strong performance on the D4RL benchmark. Strengths: 1/ The proposed method is interesting. 2/ The proposed DTQL is well-motivated. It addresses one of the big issues of diffusion policies, i.e., multi-step inference, through policy distillation. 3/ The paper has conducted solid experiments, both for intuitive understanding and for empirical results on benchmarks. Weaknesses: Overall I don’t see many weak points in this work. 1/ The distilled policy captures certain modes but is actually not multi-modal. It also misses certain modes, as in Fig. 2, which is inconsistent with what the authors have claimed in the caption. I would suggest the authors be careful and avoid over-claiming. 2/ The writing has some room for improvement. The introduction could be reorganised for better clarity. There are some typos, e.g., in Line 183, “instantiates”, and inconsistent wording, e.g., “cooperative policy” and “dual policy”. Technical Quality: 3 Clarity: 2 Questions for Authors: 1/ It seems to me from Fig. 2 that this approach encourages interpolation of the distribution. Have you tried to design tasks to validate such behaviours? 2/ It is a bit unclear how to guarantee that the distilled Gaussian policy captures all relevant information of the diffusion process, as its policy class naturally limits the expressiveness. Could you provide more explanations for this? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The experiments are done only on simple D4RL benchmarks. 
The policy distribution is quite simple and the tasks are relatively short-horizon. As a result, I’m not fully convinced that, when applied to more challenging scenarios with more complex action distributions, distilling the diffusion policy into a Gaussian with the trust-region loss can still work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's recognition of our method's novelty, motivation, and empirical performance. Thank you for the careful review. ### Responses to Weakness 1. Thank you for the good suggestion. You are absolutely right that some modes are also missed by $\mathcal{L}_{\text{KL}}$. We will modify the caption accordingly in the revision to accurately reflect this. ### Responses to Weakness 2. Thank you for pointing this out and for your valuable suggestions. We will continue polishing our paper to enhance clarity. We apologize for the oversight in Line 183 with the word "instantiates." We will correct this and thoroughly review the entire paper to eliminate any other typographical errors. Additionally, we will correct the inconsistent wording and change "cooperative policy" to "dual policy" to provide clearer communication to the readers. ### Responses to Question 1. Thank you for the insightful observation. We hypothesize that this behavior is due to neural networks' tendency to learn continuous functions, which can lead to the connection of some modes with artificial dots. This phenomenon also occurs when using diffusion models to mimic distributions with isolated density modes. For example, in Figure 2 of DQL [1], there are similar artifacts in the interpolation of modes when we use pure diffusion policy to do behaviour cloning. Yes, we have also designed an RL algorithm based on $\mathcal{L}_{\text{KL}}$, as shown in Figure 3 and Appendix G.2. We suspect that interpolation may cause some out-of-distribution action generation, which could explain the performance drop observed in the learning curve in Figure 8. [1] Wang, Z., Hunt, J. J., \& Zhou, M. (2022). Diffusion policies as an expressive policy class for offline reinforcement learning. arXiv preprint arXiv:2208.06193. ### Responses to Question 2. Thank you for the question. We are happy to make some clarification on this point. 1. 
We don't expect the Gaussian policy to capture all the information from the diffusion model due to the unimodal nature of Gaussian policies. However, the Gaussian policy is sufficient to capture the optimal mode, which has the highest Q value and is usually unimodal. For example, in Figure 2 of DQL [1], even though the diffusion policy initially captures all behavior modes, it eventually focuses only on one optimal mode after policy improvement (guided by Q value). Thus, we believe that in an offline RL setting, the Gaussian policy is able to capture one optimal behavior mode when constrained by the diffusion policy. 2. We chose the Gaussian policy because it is widely used in RL settings, such as in IQL [2] and SAC [3], making it a good starting point to demonstrate the performance of our algorithm. However, our algorithm can accommodate any one-step policy. One possible extension is to use an implicit policy parameterized by a neural network to enhance the expressiveness of the policy and cover multimodal distributions if necessary. [2] Kostrikov, I., Nair, A., \& Levine, S. (2021). Offline reinforcement learning with implicit q-learning. arXiv preprint arXiv:2110.06169. [3] Haarnoja, T., Zhou, A., Abbeel, P., \& Levine, S. (2018, July). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning (pp. 1861-1870). PMLR. ### Responses to Limitations. Thanks for the discussion. We evaluate our algorithm using widely used benchmarks for fair comparison with other offline RL algorithms. We acknowledge that the Gaussian policy may not have sufficient expressiveness for complex tasks. However, our algorithm framework can accommodate any one-step policy. Therefore, any policy, such as an implicit policy defined by passing random noise through a deep neural network, can potentially be used in our algorithm to handle more complex tasks. 
This represents an interesting future direction to explore how these diffusion-based RL algorithms perform in more complex settings. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. My concerns are addressed. I would keep my original rating and vote for acceptance. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their constructive suggestions and positive feedback. We will implement the modifications as discussed, and we appreciate the reviewer’s input, which will help us further enhance our paper.
Summary: The paper introduces Diffusion Trusted Q-Learning (DTQL), a novel approach for offline reinforcement learning that enhances performance and efficiency. DTQL utilizes a dual policy framework to bridge the gap between diffusion policies and one-step policies, improving expressiveness without the need for iterative denoising sampling. By incorporating a diffusion trust region loss, DTQL ensures stability and effectiveness in training. The method outperforms traditional distillation methods in various scenarios, offering a promising solution for offline RL challenges. Strengths: * To my knowledge this paper presents a new approach to make diffusion policy computationally efficient compared with previous methods. * Based on the experimental results, it is evident that DTQL outperforms the previous method in terms of performance. Weaknesses: * This is an incremental work that relies on Diffusion Q-learning (DQL) and adds a weighted loss to the diffusion model objectives (Equation 4) to accelerate model inference, without clearly explaining the necessity of this combination. * The explanation of the proposed method in Section 2 is too brief, with details only provided in subsection 2.3. Additionally, subsection 2.3 does not clearly explain the meaning of Trust Region in Equation 4. The subsequent Theorem and Remark do not effectively justify the necessity of using this update formula. The overall logic lacks coherence. * The introduction of other methods for accelerating training and inference in diffusion-based policy learning in Section 3 is overly lengthy and lacks relevance to the methods used in this paper. It is recommended to move most of this content to the appendix. Technical Quality: 2 Clarity: 2 Questions for Authors: * How should the term "trust region" in the article's title and method name be understood? * How is the weight function $w(t)$ in Equation 4 set in this paper, and why is it configured in this manner? A detailed discussion is needed. 
* How does DTQL eliminate the need for iterative denoising sampling during both training and inference, making it computationally efficient? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: * The writing of this paper has issues—there is a lack of coherence between the motivation, method, and experiments. Section 2 provides insufficient detail on the proposed method and does not effectively explain why it improves computational efficiency. Additionally, Section 3 dedicates too much space to the comparison experiments replacing $\mathcal{L}_{\mathcal{TR}}$, lacking a comparative analysis with the baseline method DQL. * The method presented in this paper is merely a simple combination of previous works [1] and [2], which lacks novelty. [1] Wang, Zhendong, Jonathan J. Hunt, and Mingyuan Zhou. "Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning." The Eleventh International Conference on Learning Representations. [2] Kingma, Diederik, and Ruiqi Gao. "Understanding diffusion objectives as the elbo with simple data augmentation." Advances in Neural Information Processing Systems 36 (2024). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We appreciate the time and effort you have dedicated to evaluating our work. It seems there may have been some misunderstandings regarding the main points of our paper. We will do our best to clarify these issues and hope you will consider re-evaluating our work. ### Responses to Weakness 1,2,3 We try to address the reviewer's concerns in the **Global Response**, where we introduce the main goal, contribution, and presentation logic of our paper. We believe this global response adequately addresses the reviewer's concerns regarding the weaknesses. If the reviewer still has any questions about these aspects, we are happy to discuss them further. ### Response to Question 1 **1. From Toy Example:** Let us illustrate this with toy examples. In Figure 2, the first column shows the offline action data distribution, and the second column shows the trust region loss for different actions. We can observe that actions lying in the high-density region of the raw data manifold have less trust region loss. This means that actions similar to in-sample data are trusted and have lower loss. In contrast, actions outside the in-sample data manifold, as shown in the second and third columns, have large trust region loss. This visualization demonstrates that for a given fixed diffusion model $\mu_\phi$, the loss $E[||\mu_\phi(a_\theta+\sigma_t\epsilon,t|s)-a_\theta||^2]$ can be used to check whether the action $a_\theta$ generated by the one-step policy $\pi_\theta$ is similar to or different from in-sample actions. If the generated data is similar to in-sample data, it is trusted and has a lower trust region loss. **2. From Theoretical Perspective:** From Theorem 1, the trust region can be defined by the conditional log likelihood. Specifically, for any given state $s$, a trust region of action is a set {$a|\log p(a|s)\geq\text{threshold}$} where the conditional log likelihood is approximated by the conditional diffusion loss (Eq. 4). 
The magnitude of the threshold can be tuned by the hyperparameter $\alpha$ during the optimization of the final loss (Eq. 5). **3. Further Discussion:** The discussion about the "trust region" is also covered from Line 117 to Line 122. If the reviewer has any further questions about understanding this loss, we are happy to discuss it in more detail. ### Response to Question 2 As our goal is not to design a new weight schedule for the diffusion model, we are using the state-of-the-art diffusion weight schedule, EDM [3], which is discussed in Section 4.3 and Appendix C for completeness. However, other diffusion training schedules, such as VP and VE [4], can also be accommodated within our algorithm framework. [3] Karras, T., Aittala, M., Aila, T., \& Laine, S. (2022). Elucidating the design space of diffusion-based generative models. Advances in neural information processing systems, 35, 26565-26577. [4] Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., \& Poole, B. (2020). Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456. ### Response to Question 3 In brief, we do not use a diffusion policy to generate actions, thereby avoiding iterative denoising. A comprehensive explanation of this point is discussed in the **global response**. If the reviewer has any further questions about this point, we are happy to discuss them. ### Response to Limitation 1 We are confident that by considering the **global response** and gaining a more comprehensive understanding of our paper, the logic and presentation will become clearer and more accessible for readers. We kindly request that the reviewer re-evaluate our paper in light of these clarifications. ### Response to Limitation 2 As discussed in the **global response**, our paper is not the same as DQL [1]. On the contrary, we introduce a completely different training scheme to address the drawbacks in DQL. 
Additionally, [2] is used to support Theorem 1, and we believe that our use of this reference does not diminish the novelty of our new offline RL method design in any way. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for the detailed rebuttal. Most of my concerns have been addressed. I would like to raise the score. However, the organization of this paper can be improved. In its current form, the reviewer found it not easy to follow. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful response. We are pleased to hear that most of your concerns have been well addressed. We sincerely appreciate your suggestion regarding the writing, and we will carefully consider making adjustments to the structure of the presentation to further promote understanding. Should you have any other recommendations regarding the presentation, please let us know and we will be happy to incorporate them. If there are no further technical questions, and given our significant improvement in both efficiency and performance, we would respectfully ask if you might consider re-evaluating our paper with a view towards a more positive recommendation for acceptance.
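The detector behaviour of the trust region loss discussed in this thread can be sketched numerically. The following toy example is our own 1-D illustration, not the authors' implementation: the offline action distribution is a known two-mode Gaussian mixture, so the pretrained diffusion policy $\mu_\phi$ can be replaced by the closed-form Bayes-optimal denoiser, and the loss $E[||\mu_\phi(a_\theta+\sigma_t\epsilon,t|s)-a_\theta||^2]$ is estimated by Monte Carlo.

```python
import math
import random

# Toy stand-in for the offline action distribution pi_beta(.|s): a 1-D
# Gaussian mixture with modes at -1 and +1; entries are (mean, std, weight).
MODES = [(-1.0, 0.1, 0.5), (1.0, 0.1, 0.5)]

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def denoiser(z, sigma):
    """Bayes-optimal denoiser E[a | a + sigma*eps = z] for the toy mixture;
    it plays the role of the pretrained diffusion policy mu_phi."""
    num = den = 0.0
    for m, s, w in MODES:
        var = s * s + sigma * sigma
        resp = w * normal_pdf(z, m, math.sqrt(var))          # responsibility
        num += resp * (s * s * z + sigma * sigma * m) / var  # posterior mean
        den += resp
    return num / den

def trust_region_loss(a, sigma=0.3, n_noise=2000, seed=0):
    """Monte Carlo estimate of E_eps[ (mu_phi(a + sigma*eps) - a)^2 ]."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_noise):
        z = a + sigma * rng.gauss(0.0, 1.0)
        acc += (denoiser(z, sigma) - a) ** 2
    return acc / n_noise

in_dist = trust_region_loss(1.0)  # action sitting on a data mode
ood = trust_region_loss(5.0)      # action far outside the data manifold
```

As in Figure 2 of the paper, actions on a data mode are "trusted" (low loss) while actions off the manifold are penalized; here `in_dist` comes out far smaller than `ood`.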
Summary: This paper introduces DTQL, a novel offline reinforcement learning (offline RL) method. With a newly introduced diffusion trust region loss, DTQL constrains policy updates within a predefined trust region near a diffusion policy trained by behavior cloning (BC). Through the empirical experiments, DTQL demonstrated improved performance against BC and offline RL baselines, including recent baselines that utilize diffusion models in offline RL settings. Strengths: 1. The diffusion-based trust region loss looks novel and interesting. 2. The visualization of trust region loss and the toy-tasks provide an intuitive explanation of the proposed algorithm. 3. The empirical experiments show improved performance compared to standard offline RL methods and faster inference time than most diffusion-based offline RL methods. The reviewer appreciates that the author also shows the results where DTQL performs worse than IDQL (among the Antmaze tasks), which provides a more comprehensive view of the algorithm's performance. 4. The paper is easy to follow, and the connections and differences to related works are clearly outlined. Weaknesses: 1. The performance improvements compared with DQL, IDQL, and SRPO are not very significant; in some experiments (Antmaze), IDQL outperformed the proposed algorithms. 2. The ablation studies only show one seed result, which reduces the results' plausibility, especially for Figure 5 (a), where the learning curves are quite noisy. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The reverse KL objective should also be mode-seeking instead of mode-averaging. I do not quite understand why the KL regularization does not show this property in the example provided in Figure 3. As in SRPO [1], a similar illustration was shown in Figure 5. Could the author provide more insights on what caused this difference in behavior? 2. 
Given that DTQL only utilizes a four-layer MLP while SRPO requires several ResNet blocks, why is SRPO's inference faster than DTQL's? [1] Chen H, Lu C, Wang Z, Su H, Zhu J. Score regularized policy optimization through diffusion behavior. arXiv preprint arXiv:2310.07297. 2023 Oct 11. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations were discussed in the conclusion sections, along with possible future works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's recognition of the novelty, presentation, and performance improvement of our paper. Below, we provide detailed clarifications and answers to resolve the remaining concerns. ### Responses to Weakness 1. We first want to emphasize that one major contribution of DTQL is speeding up Diffusion-QL. We proposed a dual policy training approach and achieved a dramatic improvement in terms of both training and inference efficiency. For D4RL benchmark performance, we provide more clarifications below. 1. Compared with DQL, our major baseline, our method demonstrates a dramatic speed-up, a superior average score, and greater stability. We expand the explanation for stability here. As shown in Figure 8 of [2], the training curve of DQL is less stable due to the challenges in computing gradients of the Q-value function while backpropagating through all diffusion timesteps, which may result in a vanishing gradient problem. In contrast, we propose a dual policy training approach, and train a one-step Gaussian policy with a trust region defined by a diffusion policy. This avoids the gradient vanishing issue and mitigates training instability, as shown in the learning curve in Appendix H. 2. DTQL outperforms IDQL in Gym and slightly underperforms it in Antmaze. However, IDQL is **10x** slower than us in inference. 3. Regarding SRPO, our method not only scores higher on Gym tasks but also surpasses SRPO on Antmaze-v0 as shown in Table 1 (from 30.1 to 73.6) and Antmaze-v2 as shown in Appendix G.3. [2] Lu, C., Chen, H., Chen, J., Su, H., Li, C., \& Zhu, J. (2023, July). Contrastive energy prediction for exact energy-guided diffusion sampling in offline reinforcement learning. In International Conference on Machine Learning (pp. 22825-22855). PMLR. ### Responses to Weakness 2. Thank you for your suggestion. 
We have updated the ablation study to include results across 5 random seeds in the PDF of the global response, along with the standard error. We would like to make the following clarifications: 1. The blue line in Figure 5 (a), only used for the ablation study, is quite noisy because it does not include the entropy term. After adding the entropy term, the orange line for the reward curve of the proposed algorithm becomes much more stable. 2. We acknowledge that the orange line in antmaze-large-diverse-v0 appears noisy. However, this level of variation is common in the offline RL setting and is, in fact, found to be more stable than DQL. ### Responses to Question 1. Thanks for giving us the opportunity to explain this, and we will also add more discussion about it in the revision. In sum, the KL regularization in Figure 3 is not the same as the reverse KL used in SRPO. In Figure 3, we are using the loss from Diff-Instruct [3], Variational Score Distillation [4] and Distribution Score Matching [5]. The detailed algorithm is described in Appendix D. An analysis of the gradients shows distinct differences between the Score Distillation Sampling (SDS) loss in SRPO [1] and our loss: - As discussed beginning at line 220, for SRPO [1], we have: $$ \nabla_\theta L_{\text{SDS}} = E_{t,s,\epsilon}\left[w(t)(\epsilon_\phi(z_t,t|s) - \epsilon)\frac{\partial z_t}{\partial \theta}\right] $$ - In contrast, the loss we used in Figure 3 is: $$ \nabla_\theta L_{\text{KL}} = E_{t,s,\epsilon}\left[w(t)(\epsilon_\phi(z_t,t|s) - \epsilon_{\text{fake}}(z_t,t|s))\frac{\partial z_t}{\partial \theta}\right] $$ where $\epsilon_{\text{fake}}$ is the score function we learned from the current policy. Thus, the loss we used in Figure 3 provides an updating direction that can be rather “fine” and “sharp” due to the difference between the pretrained $\epsilon_\phi$ and $\epsilon_{\text{fake}}$. 
This approach encourages covering more modes (as shown in Figure 2) and differs from the reverse KL loss (SDS loss) in SRPO. A more detailed comparison has been discussed in [4]. [3] Luo, W., Hu, T., Zhang, S., Sun, J., Li, Z., \& Zhang, Z. (2024). Diff-instruct: A universal approach for transferring knowledge from pre-trained diffusion models. Advances in Neural Information Processing Systems, 36. [4] Wang, Z., Lu, C., Wang, Y., Bao, F., Li, C., Su, H., \& Zhu, J. (2024). Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. Advances in Neural Information Processing Systems, 36. [5] Yin, T., Gharbi, M., Zhang, R., Shechtman, E., Durand, F., Freeman, W. T., \& Park, T. (2024). One-step diffusion with distribution matching distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6613-6623). ### Responses to Question 2. Thank you for your observation. We will clarify this point in the revision. 1. SRPO uses a deterministic policy, whereas our method employs a stochastic policy. Consequently, after the network forward pass, we need to resample to generate an action, which adds to the inference time. 2. Additionally, we utilize the (stochastic) max Q trick in inference, as in DQL, which involves generating $N$ candidate actions for a given state and then selecting an action randomly with a weight proportional to $\exp{Q(s,a)}$. Therefore, these two factors—stochastic resampling and max Q—contribute to making our inference slightly slower than the deterministic policy used in SRPO. We will provide a more detailed explanation of this point in the revised version. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response and extra experiments, and I have no more questions at this stage. 
I suggest the authors include the SRPO style reverse KL in Figure 3 in the final version of this work to provide a more intuitive visual understanding of the differences between them. I will keep my score and lean towards a positive outcome of this work. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s constructive and valuable suggestions. We will incorporate an SRPO-style example into Figure 3 as recommended in the revised manuscript.
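The gradient comparison in this thread can be illustrated with a 1-D toy calculation (our own sketch, not from the paper): take a teacher diffusion model whose data is $N(0,1)$ and a current one-step policy $N(1,1)$, write both noise predictors in closed form, and compare the per-sample signals $\epsilon_\phi(z_t) - \epsilon$ (the SDS/SRPO-style residual) and $\epsilon_\phi(z_t) - \epsilon_{\text{fake}}(z_t)$ (the Diff-Instruct/VSD-style signal used in Figure 3). Both have the same mean, but the second has dramatically lower variance, which is one way to read the authors' claim that it yields a "fine" and "sharp" updating direction.

```python
import random

# Teacher diffusion model trained on data a ~ N(0, 1); current one-step
# policy ("fake" model) generates a ~ N(MU, 1).  w(t) is dropped (set to 1).
MU, SIGMA, N = 1.0, 0.5, 5000

def eps_teacher(z):
    # Optimal noise prediction for data N(0, 1): eps = (z - E[a|z]) / sigma.
    return (z - z / (1.0 + SIGMA ** 2)) / SIGMA

def eps_fake(z):
    # Same formula for the current policy's distribution N(MU, 1).
    return (z - (MU + (z - MU) / (1.0 + SIGMA ** 2))) / SIGMA

rng = random.Random(0)
sds, kl = [], []
for _ in range(N):
    a = MU + rng.gauss(0.0, 1.0)       # action from the one-step policy
    eps = rng.gauss(0.0, 1.0)
    z = a + SIGMA * eps                # noised action z_t
    sds.append(eps_teacher(z) - eps)         # SDS / SRPO-style signal
    kl.append(eps_teacher(z) - eps_fake(z))  # Diff-Instruct / VSD-style signal

mean_sds = sum(sds) / N
mean_kl = sum(kl) / N
var_sds = sum((g - mean_sds) ** 2 for g in sds) / N
var_kl = sum((g - mean_kl) ** 2 for g in kl) / N
```

In this Gaussian toy case the two-score signal is in fact an exact constant (`var_kl` is zero up to floating point), while the SDS-style residual carries the full noise variance.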
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback. We believe some aspects of our paper may have been misunderstood, particularly by **Reviewer Px6z regarding the goal, contribution, and logic of our paper**, and by **Reviewer BozX who questioned the differences with SRPO**. Below, we focus on addressing these two concerns. ## Goal, Contribution, and Logic The goal of our paper is to accelerate training and inference for diffusion-policy-based offline RL methods, such as DQL, while also increasing training stability. Previous baselines, like DQL, generate actions through iterative denoising during both training and inference phases. This process cannot be computed in parallel, making training and inference time-consuming. Additionally, since actions are generated by iterative denoising, gradient computation requires backpropagation through all diffusion time steps. This can result in a vanishing gradient problem, undermining training stability. Our paper aims to address these issues with two main objectives: 1. Leverage diffusion policy and enhance performance. 2. Avoid iterative denoising for action generation, thus accelerating training and inference and enhancing stability. To this end, we propose a dual policy method containing a diffusion policy and a one-step Gaussian policy. The diffusion policy is not used to generate actions directly, while the Gaussian policy generates actions for both training and inference. **This one-step policy eliminates the need for iterative denoising, thus speeding up training and inference.** Additionally, since actions are not generated through iterative denoising, the vanishing gradient issue is mitigated, resulting in more stable training. We introduce a Trust Region loss (Eq. 4) to bridge the two policies. After training the diffusion policy $\mu_\phi$ by pure behavior cloning on actions from the dataset, we generate actions from the one-step policy. 
We then reuse the diffusion loss calculated by $\mu_\phi$ to evaluate how much the generated actions (from the one-step policy) deviate from the existing dataset. Figure 1 demonstrates that if the generated action is similar to in-sample actions, the Trust Region loss is small; if the generated action deviates significantly from in-sample actions, the Trust Region loss is large. Thus, the Trust Region loss uses the pretrained $\mu_\phi$ to measure whether the actions generated by the one-step policy $\pi$ are similar to or different from the in-sample data. Then, adding the Q function, we derive the final loss in Eq. 5. Since the one-step policy $\pi$ is the actual policy interacting with the environment, and we only use $\mu_\phi$ to calculate the diffusion loss without reverse sampling, training and inference are accelerated. The first component in Eq. 5 regularizes the generated action to not deviate far from the dataset, and the second component maximizes the Q value. ## Difference with SRPO Some reviewers are concerned that our method is similar to SRPO. While we aim to solve the same problem of diffusion policy as introduced above, our approach differs significantly from SRPO in terms of idea, loss formulation, loss explanation, theorem, and empirical results, as elaborated below: ### 1. Idea and loss formulation difference We believe the intuition behind our loss $L_{\text{TR}}$ (Eq. 4) fundamentally differs from the reverse KL-based SRPO loss. The starting point and motivation of our paper is to reuse the diffusion loss as a regularization, which helps us determine whether the generated action is far from the in-sample dataset. Here, the diffusion policy acts more like a **detector** to detect out-of-distribution actions. In contrast, the motivation of SRPO is to use reverse KL to distill information from diffusion policies, where the diffusion policy serves more as a **distillation target** rather than a detector. The loss formulations are also evidently different. 
Let $z_t = \alpha_t a_\theta + \sigma_t \epsilon$ where $a_\theta \sim \pi_\theta$. The formulations of the losses for SRPO and our method are clearly different: $$ L_{\text{SRPO}} = E_{s,\epsilon,a_\theta\sim\pi_\theta}[\log\pi_\theta(a_\theta|s) - \log \mu_\phi(a_\theta|s)] $$ $$ L_{\text{TR}} = E_{s,\epsilon,a_\theta\sim\pi_\theta}[||\mu_\phi(z_t|s) - a_\theta||_2^2] $$ ### 2. Theorem explanation difference In addition to Theorem 1, which demonstrates that our loss encourages the generated action $a_\theta$ to lie in the high-density region of the offline data, we would like to further clarify the differences: - During the training of the diffusion model, we aim to optimize $\phi$ to minimize the negative log likelihood $E_{a\sim D}[-\log\mu_\phi(a|s)]$. However, we use a variational upper bound, defined as the diffusion training loss in Eq. 2, for tractable optimization (Theorem 1 and [1]). During the optimization of the policy $\pi_\theta$, we adapt this diffusion training loss into our trust region loss $L_{\text{TR}}$ in Eq. 4, which acts as a variational upper bound for the negative log likelihood $E_{a\sim \pi_{\theta}}[-\log\mu_\phi(a_{\theta}|s)]$. Therefore, there is no inconsistency between the diffusion training and diffusion-based regularization in our method. - We further observe that SRPO incorporates $E_{a\sim \pi_{\theta}}[-\log\mu_\phi(a_{\theta}|s)]$ as the second term in its KL-based theoretical loss. The other component of the SRPO KL loss is $E_{a\sim \pi_{\theta}}[\log\pi_\theta(a_\theta|s)]$, which is notably absent in our loss formulation. The theoretical loss of SRPO is intractable to optimize, necessitating an approximation that substitutes $a_\theta$ with $z_t$. This substitution further differentiates the SRPO loss from our DTQL loss. [1] Song et al. "Maximum likelihood training of score-based diffusion models." NeurIPS 2021. ### 3. 
Empirical performance difference Due to character limitations, the discussion will be provided in our response to the specific reviewer. Pdf: /pdf/94ff6835dc5db9f15c43dedf1275121e205f4ca9.pdf
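To make the loss contrast in the global rebuttal concrete, here is a hedged 1-D sketch (our construction, not the paper's code): with behavior data $a \sim N(0,1)$ and a one-step policy $N(m,1)$, the reverse KL inside $L_{\text{SRPO}}$ has the closed form $m^2/2$, while $L_{\text{TR}}$ can be Monte Carlo estimated using the Bayes-optimal denoiser $\mu(z) = z/(1+\sigma^2)$ in place of a learned $\mu_\phi$. Both losses grow as the policy mean drifts off the data, but $L_{\text{TR}}$ only evaluates the frozen denoiser on policy samples and never needs the policy density $\log\pi_\theta$.

```python
import random

SIGMA = 0.5  # diffusion noise level; the weight w(t) is dropped (set to 1)

def reverse_kl(m):
    # KL( N(m,1) || N(0,1) ) in closed form: the SRPO-style regularizer.
    return 0.5 * m * m

def trust_region_loss(m, n=4000, seed=0):
    # Monte Carlo estimate of E[ (mu(a + SIGMA*eps) - a)^2 ] with the
    # Bayes-optimal denoiser mu(z) = z / (1 + SIGMA^2) for data N(0, 1).
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        a = m + rng.gauss(0.0, 1.0)          # action from the policy N(m, 1)
        z = a + SIGMA * rng.gauss(0.0, 1.0)  # noised action
        acc += (z / (1.0 + SIGMA ** 2) - a) ** 2
    return acc / n

kls = [reverse_kl(m) for m in (0.0, 1.0, 2.0)]
trs = [trust_region_loss(m) for m in (0.0, 1.0, 2.0)]
```

Both sequences increase with the drift $m$, matching the claim that $L_{\text{TR}}$, like the KL, pulls generated actions toward the high-density region of the behavior data; in this toy case the analytic value of $L_{\text{TR}}$ at $m=0$ is $\sigma^2(1+\sigma^2)/(1+\sigma^2)^2 = 0.2$.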
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Sample-Efficient Private Learning of Mixtures of Gaussians
Accept (spotlight)
Summary: The paper studies the important problem of private learning of the Gaussian mixture model to estimate the underlying distribution within a desired total variation distance. By combining different techniques, the authors succeed in deriving bounds that are quadratic in the dimension, thus significantly improving the existing results. Strengths: The paper contains several new results that significantly improve the state of the art. These include i) Theorem 1.3, which establishes a bound with quadratic complexity for any dimension, ii) Theorem 1.4, which proposes an improved bound for $d=1$, showing that the sample complexity can be linear in the number of components, iii) Theorem 1.5, which proposes a lower bound on the sample complexity. The latter, combined with Theorem 1.4, shows the optimality of the established bounds for the univariate Gaussian distributions. In addition, the paper is generally well and smoothly written. Weaknesses: - The paper could benefit from some numerical verifications of the results/algorithms used. - The appendix section is poorly organized. Without an outline and proof, it is difficult to follow such a long appendix. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. It would be better to explain the difference between density estimation and parameter estimation in more detail. 2. How do the bounds of Theorems 1.3, 1.4., and 1.5 behave with respect to the failure probability $\beta$ order-wise? 3. Can the authors discuss the assumption $R=n^{100}$ (line 160) a bit more? Why does it depend on $n$ (and not $k$ or $d$?)? Also, is the polynomial dependence restrictive? 4. The first sentence of the informal statement of Theorem 2.1. does not seem rigorous enough. Is such a $\tilde{\Sigma}$ unique? I don't think so. Then, if not, which choice of $\tilde{\Sigma}$ is taken into account when computing $V_{\eta}(\mathbf{X})$? Probably maximum volume? 5. The appendix sections are very difficult to follow. 
There should be a detailed outline at the beginning to guide the reader as to what each appendix deals with. Also, there should be sufficient references in the main text. For example, after the informal statement of Theorem 2.1., it should be clearly stated where the full statement and proof can be found. Similarly, after each paragraph (e.g., Section 2.1) or after each mentioned previously established result (e.g., advanced composition theorem), there should be an exact pointer to where the complete version can be found in the appendices. 6. Since the submission was allowed for 9 pages, the authors had 1.5 more pages. I think some material from the appendices, for example the algorithms used to privately learn GMM, could be moved to the main text. 7. Finally, what is missing the most is the experimental section. If I'm not mistaken, this should be easy to do. Although not all theoretical work requires experimental verification, I believe that if the experiments verify the theoretical finding, this would greatly increase the importance of the results. Minor comments: - Line 52: it is better to add "The parameters $\alpha$ and $\beta$...". - The constant $c^*$ in Theorem 1.5. does not appear in the bound. - In the first paragraph of Section 2.1, it is mentioned that "... as we can finish the procedure with hypothesis selection, as discussed above". However, hypothesis selection has not been sufficiently discussed. - Is it true that the intuition given in Section 2.1. holds if $ n \gg k^2$? - Typo line 178 (a a) - Is $k$ in lines 203-204 the number of components in the Gaussian mixture? Why does this statement ("if we altered $k$ data points...") only hold for $k$ changes? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed feedback. In the following we answer the raised questions. 1. To clarify this further, the goal of density estimation is to learn the overall distribution’s PDF (up to low total variation distance), whereas in parameter estimation we want to learn every Gaussian component’s mean and covariance. One can construct two different Gaussian mixtures with very similar PDFs, but with somewhat different components (in which case identifying the wrong Gaussian mixture is OK for density estimation but not for parameter estimation). We will add this discussion to the paper. 2. For the sake of clarity of the presentation we tried to avoid writing down the exact dependency on beta in our bounds. However, note that the sample complexity scales at most logarithmically with 1/beta. The reason is that any method that achieves a failure probability of < 0.50 can be boosted to have failure probability of beta with a mild (logarithmically in 1/beta) increase in the sample complexity. This can be done by running the estimator on log(1/beta) data sets, and then simply running private hypothesis selection on the outcomes. We will add this discussion to the paper. 3. The choice of $n^{100}$ is just for convenience, and the effect of the constant 100 on the sample complexity is negligible (i.e., the sample complexity does not change in terms of the $\tilde{O}(.)$ notation). In Line 157, we mention that the sample complexity (after the crude approximation is obtained) depends on $\log R$, so even if $R = n^{100}$ this only blows up the sample complexity by $\log(n^{100}) = O(\log n)$. 4. Indeed, the $\tilde{\Sigma}$ is not unique, and the volume in Theorem 2.1 refers to the volume (i.e., Lebesgue measure) of the set of all possible $\tilde{\Sigma}$, for a fixed choice of dataset $X$. 
As an example, if $d = 1$ (in which case $\tilde{\Sigma}$ is just a variance) and every $\tilde{\Sigma}$ between $1$ and $4$ satisfies the score function, then the volume is 3. In general, we use a higher-dimensional Lebesgue measure (see Appendix C.1) to formally define the volume. 5 and 6. We thank the reviewer for their suggestions about improving the presentation of the paper and the appendices. We will use the remaining space to give more details in the main paper (and will also improve the structure of the appendix to make it easier to navigate). 7. We would like to emphasize that our work focuses on the fundamental problem of determining the number of samples for learning a GMM privately. However, our algorithm is not computationally efficient. Note that even without privacy, it is not known whether GMMs can be learned efficiently in high dimensions. This remains an intriguing open problem in the field of computational statistics. Minor comments: Regarding Theorem 1.5, we note that $c^*$ will be some universal constant, so it can be hidden in the $\Omega$ notation. The intuition in Section 2.1 holds as long as $n \gg k$, so it also holds if $n \gg k^2$. In lines 203-204, the use of $k$ is a mistake - we will use $t$ (or another variable) to avoid confusing it with the number of components $k$. For all other comments, we agree with you and we will incorporate your feedback. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications. I believe the paper merits publication, and I maintain my initial rating.
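Returning to point 2 of the rebuttal above, the confidence-boosting argument can be written out explicitly (a standard amplification sketch; the constants are our own, not from the paper): with $m$ independent runs of an estimator that fails with probability at most $1/2$, at least one run succeeds except with probability $2^{-m}$, so

```latex
\Pr[\text{all } m \text{ runs fail}] \;\le\; (1/2)^m \;\le\; \beta
\quad\text{for}\quad m = \lceil \log_2(1/\beta) \rceil,
```

and private hypothesis selection over the $m$ candidate outputs then identifies a good one, giving a total sample complexity of $O(n_{1/2} \cdot \log(1/\beta))$, where $n_{1/2}$ denotes the cost of a single constant-confidence run.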
Summary: This paper investigates the sample complexity of privately learning mixtures of Gaussians. The authors achieve a sample complexity of approximately $O(kd^2+k^{1.5}d^{1.75}+k^2d)\log R$ where $R$ is an upper bound on the condition number of the covariance matrix and the norm of the mean. This result improves upon the previous best bound of $O(k^2d^4)$. Additionally, they constructed a lower bound of $\Omega(kd^2+k\log(1/\delta)/\epsilon)$, which refutes a conjecture from prior work, and they achieve optimal sample complexity when $d=1$. Strengths: The improvement in sample complexity is significant. The paper provides a thorough technical overview of the upper bound, detailing the incorporation of sample compression and the methods used to enhance dimension independence within the robustness-to-privacy conversion technique. Weaknesses: The paper is technically dense, making it challenging to understand and verify all the details. The authors do not fully utilize the page limit, which could have been used to explain the techniques more clearly. The technique for establishing the lower bound is not discussed in the main body of the paper. Including formal definitions and crucial lemmas in the main text could help readers better understand the key ideas. For instance, the definition of volume was unclear until I consulted the appendix. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Where does the $k^2$ term come from in the upper bound? 2. Are there alternative methods to address this problem that do not rely on the robustness-to-privacy conversion technique? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed feedback. We would like to emphasize that our bound does not depend on the condition number of the covariance matrix nor on the magnitude of the mean, i.e., our sample complexity has no dependence on $\log R$. (Otherwise, i.e., if a $\log R$ dependence were allowed, proving an upper bound would have been easy, e.g., by using private hypothesis selection on a finite cover for the set of possible mixtures.) We agree with the comments about the presentation of the paper, and will improve it in the next version (including adding more details about the definitions, lemmas, and proof sketches in the main paper). In the following, we answer the raised questions. 1. To explain the $\frac{k^2 d}{\alpha}$ term, the point is that for the private algorithm to work, we need two things. First, the private algorithm finds a covariance matrix (or more precisely, a mean-covariance pair $(\tilde{\mu}, \tilde{\Sigma})$) with low score (see Line 932 and the above few lines for the definition). Second, if a mean-covariance pair has a low score, then it is actually similar to a true mixture component $(\mu_i, \Sigma_i)$. The term $\frac{d^{1.75} k^{1.5} \sqrt{\log (1/\delta)}}{\alpha \epsilon}$ is needed for the first part. However, the $\frac{k^2 d}{\alpha}$ term is needed for the second part: that any mean-covariance pair of low score is similar to some true mixture component (this is the goal of Proposition E.4 in our paper). The reason for the $k^2$ dependence in this second term is nontrivial, so here’s a high-level intuition. Given a dataset $X$ and mean-covariance pair $(\tilde{\mu}, \tilde{\Sigma})$, the score $S((\tilde{\mu}, \tilde{\Sigma}), X)$ is small if there exists roughly an $\alpha/k$ fraction of data points that “look like” they came from $\mathcal{N}(\tilde{\mu}, \tilde{\Sigma})$.
The point is that one can have one point come from each of $k$ different mixture components which, together, look like $k$ samples from a totally different Gaussian from any of the $k$ mixture components. As a result, we need to make sure we have more than $k^2/\alpha$ total points, because then an $\alpha/k$ fraction of the data points is more than $k$ total samples. We actually need an additional factor of $d$, because it turns out that we can even have $\Omega(d)$ points from each of the mixture components which look like $\Omega(kd)$ samples from a totally different Gaussian. 2. The only other known approach for privately learning unbounded and high-dimensional GMMs (density estimation) is [AAL24] that uses sample-and-aggregate. For the univariate setting (density estimation for GMMs with unbounded parameters), there is another approach that uses stability-based histograms [AAL21]. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I maintain my score.
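The counting behind the $\frac{k^2 d}{\alpha}$ term explained above can be made concrete (our own back-of-the-envelope rendering of the rebuttal's argument): a low score only certifies that an $\alpha/k$ fraction of the $n$ points fits $\mathcal{N}(\tilde{\mu}, \tilde{\Sigma})$, and an adversarial coalition of roughly $d$ points per true component can mimic $\Omega(kd)$ samples from a spurious Gaussian, so we need

```latex
\frac{\alpha}{k} \cdot n \;\gg\; k d
\quad\Longrightarrow\quad
n \;\gg\; \frac{k^2 d}{\alpha}.
```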
Summary: This work focuses on the task of density estimation of a mixture of Gaussians under the restriction of differential privacy. Unlike parameter estimation, density estimation does not require accurately estimating the mixture's parameters, but instead bounds the total variation distance between estimated and true distributions. This task can be achieved even without any boundedness or separation assumptions on the parameters of the components. This task is known to require a sample size of $O(k d^{2}/\alpha^{2})$ even in the non-private setting, where $k$ is the number of components, $d$ is the dimension, $\alpha$ is the bound on the TV distance, and the $O$ notation ignores logarithmic factors. In the private setting, previous work [1] achieved an $\alpha$ accuracy guarantee under $(\epsilon, \delta)$-DP with $O(k^{2} d^{4}/\alpha^{2} \epsilon)$ (the exact bound includes several other terms that depend on $\log(1/\delta)$ as well). This work provides an improved bound, reducing the dependence on the parameters to $O \left(\frac{k d^{2}}{\alpha^{2}} + \frac{k d^{2}}{\alpha \epsilon} \right)$ in the high dimensional regime, $O \left(\frac{k}{\alpha^{2}} + \frac{k \cdot \log(1/\delta)}{\alpha \epsilon} \right)$ in the univariate setting, and lower bounds which asymptotically match the upper bound in the univariate case and nearly match it in the multivariate one. The proposed algorithm relies on first achieving a crude estimation of the parameters and then using hypothesis selection to improve the estimation. The crude estimation is achieved using robust GMM estimation techniques and robustness-to-privacy conversion, based on an inverse sensitivity-like instantiation of the exponential mechanism. This method is computationally inefficient. [1] Mohammad Afzali, Hassan Ashtiani, and Christopher Liaw. Mixtures of Gaussians are privately learnable with a polynomial number of samples. In Algorithmic Learning Theory, pages 1–27. PMLR, 2024.
Strengths: The results presented in this work provide a significant improvement over the previously known ones. Though the proof technique is highly involved, the authors presented the proof outline in a relatively intuitive way, and provided motivation for the various steps it includes. Weaknesses: Despite the great work done by the authors and my best efforts, I was not able to follow all of the proof structure. In particular, I could not find the justification for some of the terms that appear in the bound presented in Theorem 1.3. It seems to me like a section that "puts everything together" would be of great benefit. I will try to describe my current understanding and existing gaps. To the best of my understanding: * The $\frac{k d^{2}}{\alpha^{2}}$ and $\frac{k d^{2}}{\alpha \epsilon}$ were explained in the "Reducing to “crude approximation”" section, and they represent the sample size required to accurately and privately learn the GMM given some crude estimation of its components, using the hypothesis selection method. * The $\frac{d^{1.75} k^{1.5} \sqrt{\log(1/\delta)}}{\alpha \epsilon}$ was explained in the "Improving Dimension Dependence" section, where $\frac{d^{1.75} k}{\alpha \epsilon}$ results from the fact that $O\left(\frac{d^{1.75} k}{c \epsilon} \right)$ points are required to get the crude estimation of the parameters of a single component under $c$-robustness, which can then be translated to a DP estimation with $\alpha$ accuracy using robustness-to-privacy conversion techniques (Theorem 2.1), and the additional $\sqrt{k \cdot \log(1/\delta)}$ term results from advanced composition over $k$ components. * I failed to understand where the additional two terms ($\frac{(k \cdot \log(1/\delta))^{1.5}}{\alpha \epsilon}$ and $\frac{k^{2} d}{\alpha}$) come from, and I suspect they were accumulated at some point during the robustness-to-privacy transformation.
Technical Quality: 3 Clarity: 3 Questions for Authors: As I mentioned before, I would find an additional section that brings all the components together to outline the final proof method, focusing on the final achieved bound, very useful. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed feedback. We are happy to add a description that puts everything together and explains where each of the terms comes from, as suggested by the reviewer. In the following, we explain the terms in the sample complexity that the reviewer asked about. To explain the $\frac{(k \log (1/\delta))^{1.5}}{\alpha \epsilon}$ term, we note that in the crude estimation part, the sample complexity (from applying Theorem C.3, the formal version of Theorem 2.1) also has an assumption (see the “Volume” bullet) that $n \ge \frac{\log (1/\delta_0)}{\epsilon_0 \cdot \eta^*}$. Here, $\epsilon_0, \delta_0$ represent the privacy terms for a single iteration of the crude estimation (to learn one parameter), and $\eta^*$ represents the robustness threshold, which ends up being roughly $\alpha/k$. The reason the robustness threshold depends on $k$ like this is that each component on average has weight $1/k$, so one can corrupt a $1/k$ fraction of the points and completely alter a component. Also, since we run the crude estimation $k$ times, we can use advanced composition to say that if each iteration was $(\epsilon_0, \delta_0) = (\frac{\epsilon}{\sqrt{k \log (1/\delta)}}, \frac{\delta}{k})$-DP, the overall algorithm is $(\epsilon, \delta)$-DP. With all of these parameters set, we precisely get $ \frac{\log (1/\delta_0)}{\epsilon_0 \cdot \eta^*} = \Theta\left(\frac{(k \log (1/\delta))^{1.5}}{\alpha \epsilon}\right)$. To explain the $\frac{k^2 d}{\alpha}$ term, the point is that for the private algorithm to work, we need two things. First, the private algorithm finds a covariance matrix (or more precisely, a mean-covariance pair $(\tilde{\mu}, \tilde{\Sigma})$) with low score (see Line 932 and the above few lines for the definition). Second, if a mean-covariance pair has a low score, then it is actually similar to a true mixture component $(\mu_i, \Sigma_i)$.
The previous terms of $\frac{d^{1.75} k^{1.5} \sqrt{\log (1/\delta)}}{\alpha \epsilon}$ and $\frac{(k \log (1/\delta))^{1.5}}{\alpha \epsilon}$ are needed for the first part. However, the $\frac{k^2 d}{\alpha}$ term is needed for the second part: that any mean-covariance pair of low score is similar to some true mixture component (this is the goal of Proposition E.4 in our paper). The reason for the $k^2$ dependence is nontrivial so here’s a high-level intuition. Given a dataset $X$ and mean-covariance pair $(\tilde{\mu}, \tilde{\Sigma})$, the score $S((\tilde{\mu}, \tilde{\Sigma}), X)$ is small if there exists roughly $\alpha/k$ fraction of data points that “look like” they came from $\mathcal{N}(\tilde{\mu}, \tilde{\Sigma})$. The point is that one can have one point come from each of $k$ different mixture components which, together, look like $k$ samples from a totally different Gaussian from any of the $k$ mixture components. As a result, we need to make sure we have more than $k^2/\alpha$ total points, because then an $\alpha/k$ fraction of the data points is more than $k$ total samples. We actually need an additional factor of $d$, because it turns out that we can even have $\Omega(d)$ points from each of the mixture components which look like $\Omega(kd)$ samples from a totally different Gaussian. --- Rebuttal Comment 1.1: Comment: I thank the authors for their explanation, and hope it will be reflected in the final version, so that all readers will have the opportunity to fully comprehend this important result.
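The parameter bookkeeping for the $\frac{(k \log (1/\delta))^{1.5}}{\alpha \epsilon}$ term in the rebuttal above can be checked directly (our own substitution, with constants suppressed): plugging $\epsilon_0 = \frac{\epsilon}{\sqrt{k \log(1/\delta)}}$, $\delta_0 = \frac{\delta}{k}$, and $\eta^* = \Theta(\alpha/k)$ into the volume condition gives

```latex
\frac{\log(1/\delta_0)}{\epsilon_0 \cdot \eta^*}
\;=\; \log\!\frac{k}{\delta} \cdot \frac{\sqrt{k \log(1/\delta)}}{\epsilon} \cdot \frac{k}{\alpha}
\;=\; \Theta\!\left(\frac{(k \log(1/\delta))^{1.5}}{\alpha \epsilon}\right),
```

where the $\log(k/\delta)$ factor is absorbed as $\tilde{\Theta}(\log(1/\delta))$ up to lower-order terms.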
Summary: The paper studies the problem of learning a mixture of $k$ $d$-dimensional Gaussians using a differentially private mechanism with respect to the samples. It provides an improved sample complexity which has asymptotically optimal dependence on the dimension $d$ for small $k$; the lower bound is also given by the paper. Additionally, for $d=1$ the paper provides optimal sample complexity. The paper follows a high-level approach of obtaining crude approximations of the means and covariances of the component Gaussians, which can be used to reduce – using a net-based argument – the problem to private hypothesis selection, for which an existing algorithm (needing a number of samples only logarithmic in the size of the net) can be used. To obtain the crude approximations respecting the DP guarantee, the paper uses a variant of the exponential mechanism with a carefully constructed scoring function measuring the distance between a candidate Gaussian and any “heavy” component of the Gaussian mixture. This uses a number of samples depending on the size of the sample set $n$ and the dimensionality of the hypotheses, $d^2$. A naive approach fails due to the blowup incurred, and the paper utilizes sample compression to reduce the dependence to $O(d \log n)$ and improves the dimensionality dependence to $O(d)$ via an estimation argument. The above is outlined in more detail in the main paper and proved formally in the appendices. While all the details have not been verified by the reviewer, the approach taken by the paper seems correct. Strengths: 1. The paper proves improved (and in some cases optimal) sample complexity for privately learning Gaussian mixtures. 2. The paper uses a novel crude-approximation-based approach paired with an existing algorithm for private hypothesis selection. 3. The paper leverages inverse sensitivity mechanisms for decoding the crude approximations, techniques for sample compression, and combines them with a way to improve the dimensionality dependence. 4.
The main result, the one for the univariate case, and the lower bound together constitute notable progress on a well-studied problem. 5. The paper is well written and the provided roadmap greatly aids the understanding. Weaknesses: 1. The sample complexity does not match the lower bound in the dependence on $k$. 2. The degradation of the dependence on $k$ from the univariate to the multivariate case is not explained in the main paper. 3. The details of the various technical parts are tedious and could be alleviated by better presentation, e.g., by listing all the parameters and their dependencies in a table. Technical Quality: 3 Clarity: 4 Questions for Authors: It will be useful to combine Definitions 1.1 and 1.2 into a precise definition of privately learning GMMs. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed feedback. We now address some of the issues/questions raised by the reviewer. Note that the algorithm for the univariate case (Section F) is completely different from the multivariate case (Section E). The algorithm in the univariate case requires us to order the data points from smallest to largest, which doesn’t make sense in high dimensions. So, it only works for $d = 1$. We could use the multivariate algorithm when $d = 1$ as well, but we would get a worse dependence on $k$ (more specifically, we would get $k^2$ in the bound rather than linear dependence on $k$). We are happy to list our results, along with previous results, in a table so that one can easily compare our work with previous work (as well as compare upper/lower bounds). We also agree that a final definition combining Definitions 1.1 and 1.2 would be useful, we will add that immediately after these two definitions. --- Rebuttal Comment 1.1: Comment: I acknowledge the rebuttal by the authors. Just to clarify, in my weakness no. 3 comment, I had suggested listing the parameters used in the different proofs and their dependencies in table(s). This would make the proofs a bit more accessible in my opinion and I hope the authors will look into it. Overall my rating of Accept remains unchanged.
Rebuttal 1: Rebuttal: We thank all of the reviewers for their helpful feedback. We apologize for the difficulty in reading the paper. We will make changes as suggested by the reviewers to improve the readability of the paper, such as adding a table of results, adding some additional technical description to the main body (up to space limitations), and adding some outline and summary sections in the appendix so that it will be more structured and understandable.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
PCoTTA: Continual Test-Time Adaptation for Multi-Task Point Cloud Understanding
Accept (poster)
Summary: This paper presents an innovative, pioneering framework for Continual Test-Time Adaptation in multi-task point cloud understanding, enhancing the model’s transferability towards the continually changing target domain. Strengths: Introducing CTTA into a multi-task 3D vision setting is practical and realistic. The implementation of all modules is based on existing challenges in CTTA. Weaknesses: The integration of CTTA with multi-task point clouds is not well developed, and the experimental design is not entirely reasonable. The innovative method does not clearly distinguish itself within this specific setting. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. In the Introduction section, "On the other hand, few works like MM-CCTA" should be changed to "On the other hand, few works like MM-CTTA." 2. The representation of each learnable prototype in (a) of Figure 3, which pairs once with all Source prototypes, lacks clarity and does not reflect the motivation well. 3. The described new domain leans more towards a new dataset, and in experiments, this domain change occurs only once. Is this specific CTTA or traditional TTA? 4. What are the specific challenges of combining CTTA with point cloud multi-task learning, especially since the method in this paper is quite general and can also be applied to 2D tasks? 5. What pretrained models were used in methods like CTTA? How did they utilize two source domains? Why is there such a significant difference in experimental accuracy? Can you provide a detailed analysis of the reasons behind this large gap, which I believe is not solely due to the method proposed in this paper? 6. This paper does not clearly explain how learnable prototypes are obtained and learned. What about their specific quantities, and does this quantity also affect the experimental results? Could you provide ablation experiments? 7.
There is a lack of experiments and detailed evaluations to assess the effects of error accumulation and forgetting resistance. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The challenge of the task, i.e., CTTA for point clouds, is unclear, and the authors did not explain why existing image-based methods (such as Tent, CoTTA, RMT) are not appropriate for the task. The novelty is also confusing; the reviewer cannot understand the necessity of building a complicated graph instead of directly injecting prototype information into the current data point. If I missed something important in the paper, please let me know. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your detailed review. We're glad to see your kind recognition of the innovation of our pioneering framework for Continual Test-Time Adaptation in multi-task point cloud understanding. In the following, we will address each of your concerns: **Q1: Typo.** Thanks. We will carefully proofread the paper and revise all typos in the revision. **Q2: Learnable prototype in Fig 3(a).** Sorry for the confusion. Each learnable prototype is designed to capture distinct features and characteristics of the target domain, paving the way for handling subsequent unknown testing data. These learnable prototypes are dynamic and adjusted during the adaptation process to better align with the source prototypes, which represent the source domains’ features. Our APM pairs and mixes based on the similarity between the source-learnable prototypes and the current target, effectively incorporating all source domain information. Catastrophic forgetting is effectively mitigated by explicitly fusing inherent source representations (prototypes) and applying graph attention mechanisms to target features, thus achieving good stability of the models. **Q3: CTTA setting.** Strictly following typical CTTA methods [9, 37], we perform continual test-time adaptation on one additional new dataset, consisting of two target domains to ensure fair comparisons. Our testing samples’ sequence changes randomly and continuously between these two target domains, ensuring a fair comparison in line with the CTTA setting. We will clarify this in the revision. **Q4: Specific challenges of PCoTTA.** Simply combining CTTA with point cloud learning faces great challenges. Firstly, compared to grid-structured 2D images, 3D point clouds are unordered and more challenging in CTTA. To address this, we specifically designed a graph attention mechanism to fully learn token sequences within a complete graph structure.
This effectively captures contextual relationships and semantic features between unordered and irregular point cloud patches. Secondly, the catastrophic forgetting issue is underexplored for point cloud multi-task learning in continually varying domains, where the model would inevitably forget the knowledge of previously learned tasks while adapting to new ones. Particularly, in 3D point cloud understanding, different tasks are more challenging due to variations in data density and distribution. It’s harder to obtain a unified model that generalizes over raw 3D points compared to grid-like 2D images. To address this, we build a new CTTA benchmark for multi-task point cloud understanding, devise task-specific prototype banks, including both source and learnable prototypes, and design APMs by explicitly fusing inherent source prototypes with the current target and applying graph attention mechanisms to target features, thus achieving good stability of the models. Note that our PCoTTA is specifically designed for 3D point clouds, but it has the potential to be adapted to 2D tasks, and we will explore this in future work. **Q5: More analysis of the experimental results.** We reproduced the CTTA method using PIC as the backbone network. In our benchmark, all sources are treated as one expanded dataset for PIC pre-training, enabling the integration of multi-domain information. Notably, our PCoTTA forms prompt-query pairs from two different sources. PIC shows significant results and excels in multi-task scenarios, but it lacks specific designs for multi-domain learning. In contrast, our PCoTTA effectively addresses this by aligning the testing data features with the familiar source prototypes and dynamically updating learnable prototypes through the GSFS module. To the best of our knowledge, no existing CTTA methods support multi-task learning.
We propose to integrate CTTA and PIC to facilitate multi-task and multi-domain learning, and our novel design achieves significant improvements. In particular, our task-specific prototype bank enhances multi-task learning capabilities, and the Gaussian Splatted-based Graph Attention efficiently refines the target data representation to align with the source domains. **Q6: Learnable prototypes.** Learnable prototypes are initialized as trainable parameters and designed to capture semantic features across target domains. In CPR, these prototypes become more distinct from each other through the repulsion loss L_pr, while the most similar prototype to the current target is drawn closer to the target features. In general, this process can be viewed as end-to-end unsupervised clustering, with separation based on the number of potential target domains. Consequently, the number of learnable prototypes ideally approximates the number of target domains, though this is not strictly required. We conducted an additional ablation study on the number of learnable prototypes, as shown in Table C in the rebuttal PDF, and the results indicate minimal changes. Additionally, we show the case with no learnable prototypes (i.e., quantity 0), where our method degrades to aligning the target feature by solely considering the source prototypes’ similarities. While this case achieves some degree of test-time adaptation, its performance falls short of our full PCoTTA. **Q7: More experiments to verify catastrophic forgetting and error accumulation.** Thanks for your valuable suggestions. Please refer to Q3@Reviewer SkYG for more details. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed responses from the authors, and they solved my concerns. Some further comments: 1. If no source prototype is given due to privacy protection, will the method still work? 2. Although the graph network is used for 3D data, it is not the main contribution of the paper, and the graph attention network is not novel.
I suggest the authors improve their presentation, and make readers aware of the difference in CTTA between 3D and 2D data. Otherwise, it should be a general method for any classification task. --- Rebuttal 2: Comment: **Q1:** Thanks for the comment. We’d like to point out that many recent TTA/CTTA papers [I, II, III] reveal that the use of source prototypes such as features, tokens, and statistics from the source domain does not pose privacy issues and can be utilized to further enhance the adaptability on target data. Firstly, TTAC [I] calculates category-wise and global statistics of source domains as anchors and pre-saves them for streaming online test-time adaptation. Similarly, the published work [II] generates the class-wise source prototypes before model deployment and presents an auxiliary task based on nearest source prototypes to align the source and target features. Besides, TPS [III] computes per-class prototypes of the source domains, enabling the prototypes to be cached and reused for all subsequent test-time predictions. Since our work is inspired by them, we strictly follow the same setting. In line with [I, II, III], our source prototypes are pre-cached before deployment and stored as constant parameters (token-like vectors) alongside the pre-trained source model. After deployment, our method does not access the source data, ensuring no interaction with the source data is involved in the test-time adaptation stage. Different from them, our method focuses on domain-level prototypes instead of class-level prototypes, avoiding pseudo labeling of categories and instead highlighting the inherent features of the entire domain. Please note [I, II, III] are three examples of this common practice, and there are many emerging in recent years. We just wish to convey that this is indeed a well-recognized practice in the community. Similar to [I, II, III], source prototypes are the key and indispensable information that we exploit to devise our methodology.
Without it, our method is incomplete in addressing the test-time feature shifting during the continual adaptation and would lead to weaker results. We have also analyzed the cases without source prototypes in our ablation studies, as shown by models B and C in Table B of the rebuttal PDF. In this case, our method relies solely on learnable prototypes (i.e., an incomplete framework), achieving a certain degree of adaptation. Although this reduces our method's effectiveness, it still outperforms CoTTA [37]. Specifically, CoTTA achieves 58.3, 56.7, and 55.2 during the 3 different rounds, while our PCoTTA achieves 36.8, 36.2, and 35.7 (lower is better), demonstrating superiority and state-of-the-art performance in continual test-time adaptation for 3D point clouds. In the revision, we will improve the clarity regarding this in the paper. [I] Su et al. Revisiting realistic test-time training: Sequential inference and adaptation by anchored clustering. In NeurIPS 2022. [II] Choi et al. Improving test-time adaptation via shift-agnostic weight regularization and nearest source prototypes. In ECCV 2022. [III] Sui et al. Just Shift It: Test-Time Prototype Shifting for Zero-Shot Generalization with Vision-Language Models. Preprint in ArXiv 2024. **Q2:** Thanks for the suggestion. Firstly, we would like to stress that our main contribution is that we introduce a new multi-task continual test-time adaptation task for point cloud understanding, and present the first, pioneering framework PCoTTA for this new task, rather than merely a graph attention module. The proposed PCoTTA improves the point cloud model's adaptability and robustness in continuously changing domains by aligning the target domains with all source domains. Besides, we also present a new multi-task benchmark for this new CTTA setting. Secondly, although graph attention has been previously studied in the domain adaptation field, it has not been exploited in continual test-time adaptation or in CTTA scenarios on 3D data.
Graph attention is effective at capturing contextual relationships between nodes on 2D images, as pixels are regularly distributed, but it may be less effective on unordered 3D data. Considering this, we formulate a Gaussian kernel function to calculate attention coefficients based on token similarity, which does not require ordered input and facilitates unordered point cloud patches, thus suiting 3D point cloud data. We then devise Gaussian Splatted-based Graph Attention that takes the calculated attention coefficients as input and fuses source-learnable prototype pairs with the target data features, aligning them with the sources. This method enables comprehensive, patch similarity-based adaptation for effective CTTA in 3D point cloud understanding. Lastly, the key difference between a 3D point cloud and a 2D image is that 3D data is disordered, unstructured, and sparsely distributed, making traditional 2D image methods less effective or even inapplicable. As aforementioned, our method involves specific designs for 3D point cloud data, which may need extra adjustments when applied to 2D images, and may lead to weaker performance on 2D images. --- Rebuttal 3: Title: Further clarification for CTTA setting Comment: Thank you again, and we’d like to provide a bit of further clarification here. Below, we show that our setting belongs to CTTA and justify the exploitation of source prototypes. (Our method is incomplete but still works without exploiting source prototype information, still outperforming CoTTA [37]. Please see our last response above.) This can be evidenced by the recent CTTA works [IV, V, VI] that involve source prototype information during continual test-time adaptation. For example, RMT [IV] extracted source prototypes of each class and pre-cached them before the adaptation, and then used the source prototypes to calculate the contrastive loss during continual test time.
Similarly, SANTA [V] also pre-computed the source prototypes before adaptation and used them for target alignment during continual test-time adaptation. Besides, OBAO [VI] proceeded in a similar manner and acknowledged that this is fair in the CTTA setting. As pointed out on Page 8 of this ECCV paper [VI], “Some previous methods [IV, V] directly penalize the movement of target domain samples in the feature space relative to the source prototypes. This can be broadly interpreted as penalizing the movement of corresponding elements between ˆV and Vt in our defined CRG.” Since we are inspired by and follow these works, our setting belongs to the CTTA category. We will revise the paper to clarify this issue and eliminate misunderstandings. Besides, we reproduce two CTTA methods that use source prototypes, RMT [IV] and SANTA [V], and present the comparison results in the Table below, demonstrating that our method still outperforms these CTTA methods with source prototypes. The reasons for the superiority over these methods lie in three aspects, shown below. 1) These methods heavily rely on a student-teacher architecture to realize consistency regularization. As a result, they inevitably introduce pseudo-label noise, leading to error accumulation. Although they use symmetric cross-entropy or other techniques to alleviate the pseudo-label noise, the problem still exists and cannot be fundamentally addressed. In contrast, our PCoTTA framework does not use any online or offline pseudo-labeling techniques [9, 37], which inherently avoids the risk of error accumulation. In Table 1 of the manuscript, the results across 3 continuous rounds also illustrate this effectiveness in avoiding error accumulation. 2) These methods are specifically designed for CTTA on 2D images and perform well there.
However, compared to 2D images, 3D point cloud data is disordered, unstructured, and sparsely distributed, making these 2D image-based CTTA methods less effective or even inapplicable. Our method involves specific designs for 3D point cloud data, e.g., Gaussian Splatted-based Graph Attention for comprehensive, patch similarity-based adaptation, well-suited for 3D data, and achieves better performance than these methods. 3) These methods often focus on single tasks and lack specialized designs for multi-task learning, which may lead to gradient conflicts during the optimization of continual test-time adaptation. Instead, our PCoTTA devises task-specific prototype banks where individual source-learnable prototype pairs are used for different adaptations in each task, thus favoring multi-task learning in our setting.

| Rounds | 1 | | 2 | | 3 | |
| :------------- | :------------- | :------------- | :------------- | :------------- | :------------- | :------------- |
| Target Domains | ModelNet40 | ScanObjectNN | ModelNet40 | ScanObjectNN | ModelNet40 | ScanObjectNN |
| Methods | Rec./Den./Reg. | Rec./Den./Reg. | Rec./Den./Reg. | Rec./Den./Reg. | Rec./Den./Reg. | Rec./Den./Reg. |
| RMT [IV] | 31.2/44.0/34.3 | 47.4/59.6/39.9 | 30.6/43.5/33.9 | 45.6/53.0/35.8 | 30.4/42.7/33.8 | 45.9/51.1/36.4 |
| SANTA [V] | 32.3/42.1/37.8 | 44.9/55.2/38.6 | 31.7/41.9/37.4 | 42.0/53.4/35.6 | 30.1/41.6/36.4 | 40.6/52.9/34.7 |
| Ours | 6.3/21.4/15.4 | 8.9/28.3/20.7 | 5.5/19.9/14.6 | 8.5/26.9/19.6 | 5.4/18.6/12.1 | 8.2/25.2/19.3 |

[IV]. Döbler et al. Robust Mean Teacher for Continual and Gradual Test-Time Adaptation. In CVPR 2023. [V]. Chakrabarty et al. SANTA: Source Anchoring Network and Target Alignment for Continual Test Time Adaptation. In TMLR 2023. [VI]. Zhu et al. Reshaping the Online Data Buffering and Organizing Mechanism for Continual Test-Time Adaptation. In ECCV 2024.
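As a purely illustrative aside, the Gaussian-kernel attention coefficients described above could be sketched roughly as follows. This is a hypothetical toy sketch, not the paper's implementation: all names, the bandwidth `sigma`, and the residual fusion step are our assumptions.

```python
import numpy as np

def gaussian_attention_fuse(target_tokens, prototype_tokens, sigma=1.0):
    """Fuse prototype tokens into target tokens via Gaussian-kernel
    attention coefficients computed from pairwise token similarity.

    target_tokens:    (N, D) unordered target patch tokens
    prototype_tokens: (M, D) source/learnable prototype tokens
    """
    # Pairwise squared Euclidean distances between tokens (needs no token order).
    d2 = ((target_tokens[:, None, :] - prototype_tokens[None, :, :]) ** 2).sum(-1)
    # Gaussian kernel: more similar tokens receive larger coefficients.
    coeff = np.exp(-d2 / (2.0 * sigma**2))            # (N, M)
    # Row-normalize so each target token's coefficients sum to 1.
    coeff = coeff / coeff.sum(axis=1, keepdims=True)
    # Shift each target token toward its coefficient-weighted prototype mixture.
    return target_tokens + coeff @ prototype_tokens   # (N, D)
```

Because the coefficients depend only on pairwise distances, permuting the target tokens simply permutes the output rows, which is why a kernel-based scheme of this kind suits unordered point cloud patches.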
--- Rebuttal Comment 3.1: Comment: Dear Reviewer 5Vdz, We thank you for your time in providing further comments. We have carefully responded to them. Could you please take a few minutes to review these responses? If anything is unclear, we are happy to clarify further. Thank you --- Reply to Comment 3.1.1: Comment: Dear Reviewer 5Vdz, We thank you so much for raising your rating and for your support. We will revise the paper based on your constructive comments.
Summary: The paper introduces an innovative and unified framework for Continual Test-Time Adaptation in multi-task point cloud understanding, which includes reconstruction, denoising, and registration. The framework integrates three new modules for different purposes, i.e., automatic prototype mixture for preventing catastrophic forgetting, Gaussian Splatted feature shifting for mitigating error accumulation, and contrastive prototype repulsion for implicitly learning distinctive features. Experimental results on public datasets demonstrate the state-of-the-art performance of the proposed framework and the effectiveness of the proposed modules. Strengths: 1. Writing quality is good. The paper is well-structured and clearly written. 2. SOTA performance. The proposed framework outperforms the state of the art by an impressively large margin on three 3D point cloud understanding tasks: reconstruction, denoising, and registration. 3. Ablations. Ablation experiments are provided to verify the effectiveness of the proposed modules. Weaknesses: 1. Insufficient explanations and verifications. Although the results presented in Table 3 reveal the performance improvement achieved by each proposed module, their underlying purposes, such as preventing catastrophic forgetting and error accumulation, are not evident from these numbers alone. More specifically, the reader may not be able to judge from these values why APM can resolve catastrophic forgetting. The authors should provide deeper analysis and more effective verification/visualization to support the claimed effects. 2. Counterintuitive “source prototype estimation”. If I did not get it wrong, the tokens output from PointMAE should be unordered and irregular. How and why can the prototype be calculated by averaging all tokens without considering their permutations? Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses above.
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation is included in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate your thorough review. We are pleased to see your kind recognition of the innovation of our framework for Continual Test-Time Adaptation in multi-task point cloud understanding and our three new modules. Additionally, we appreciate your acknowledgment of the effectiveness demonstrated through extensive experimental results. In the following, we will address each of your concerns: **Q1: More experiments to verify catastrophic forgetting and error accumulation.** As for the issue of catastrophic forgetting, we have verified it with both quantitative and qualitative experiments. As shown in Table 1 of the manuscript, our continual test-time adaptation setting is similar to the one in CoTTA [37], where we track the continuous performance across 3 independent evaluation rounds, with samples shuffled randomly in each round. The results in 3 rounds demonstrate the stability, i.e., robust continuous online learning abilities, and resilience against catastrophic forgetting in continually varying target domains. Moreover, we provide T-SNE visualizations for three independent validation rounds in Figure A and task-specific visualizations for each round in Figure B in the rebuttal PDF. Our method remains stable across continuous rounds, demonstrating that our proposed APM and GSFS effectively mitigate catastrophic forgetting by explicitly leveraging constant source prototypes and source domain representations, thereby avoiding over-reliance on adaptively learned information. Finally, our APM pairs and mixes based on the similarity between the source-learnable prototypes and the current target, effectively incorporating all source domain information. As a result, catastrophic forgetting is effectively mitigated by explicitly fusing inherent source representations (prototypes) and applying graph attention mechanisms to target features, thus achieving good stability of our models.
As for the issue of error accumulation, we have verified it both theoretically and empirically. Firstly, as evidenced by [38], pseudo labels tend to introduce pseudo-label noise and can lead to error accumulation in long-term adaptation. In contrast, our PCoTTA framework does not use any online or offline pseudo-labeling techniques [9, 37], which inherently avoids the risk of error accumulation. In Table 1 of the manuscript, the results across 3 continuous rounds also illustrate this effectiveness in avoiding error accumulation. **Q2: Source prototype estimation.** Sorry for the confusion. Following Point-MAE [26], our method reshuffles the patch-wise tokens after the Transformer decoder to maintain their order. Additionally, similar to PIC [9], during testing, our mask is applied to the query target (i.e., the test output), and therefore patch-wise token shuffling is not needed. Averaging all tokens to form a source prototype ensures a general and comprehensive representation of the source domains, avoiding bias from individual samples. This permutation-invariant method effectively summarizes the domain's information. We will improve the clarity in the revision. --- Rebuttal Comment 1.1: Comment: Dear Reviewer ni1v, We thank you for your time in reviewing our paper. We have carefully responded to your concerns in the rebuttal. If possible, could you please take a few minutes to review these responses? If anything is unclear, we are happy to clarify further. Thank you --- Rebuttal Comment 1.2: Comment: I am glad to have the authors' feedback. The rebuttal somewhat addressed my concerns (Q2). However, Q1 remains open, since I was asking about evidence of the underlying effect of each component, not the ultimate architecture. For instance, there should be a visualization comparison between w/ and w/o APM to see if the samples indeed avoid unsatisfactory alignments. Thus I hold my rating.
--- Rebuttal 2: Comment: Thank you for your valuable comment. As per your comment, we have just conducted the experiment with and without APM using T-SNE visualization, which clearly indicates undesirable alignment between targets and sources when APM is not used. This is also evidenced by the quantitative results of models B and C in Table B of the rebuttal PDF. Since the T-SNE visualization figure cannot be attached here, we will include it in the revision.
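On the source-prototype-estimation point (Q2) raised in this thread, a minimal sketch of why mean pooling over patch tokens is permutation-invariant. This is our own illustrative code with assumed names and shapes, not the paper's implementation:

```python
import numpy as np

def estimate_prototype(tokens):
    """Average patch-wise tokens of shape (N, D) into one prototype vector (D,).

    Addition is commutative, so the mean is invariant to token order:
    unordered point cloud patches need no canonical permutation before pooling.
    """
    return tokens.mean(axis=0)
```

Any reordering of the rows of `tokens` yields the same prototype, which is the permutation-invariance property invoked in the response above.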
Summary: This paper introduces a novel framework designed to enhance model transferability in continually changing target domains for multi-task point cloud understanding. The framework, termed PCoTTA, includes three key components: Automatic Prototype Mixture (APM), Gaussian Splatted Feature Shifting (GSFS), and Contrastive Prototype Repulsion (CPR). These components work synergistically to prevent catastrophic forgetting, mitigate error accumulation, and ensure the distinguishability of prototypes during adaptation. The authors present comprehensive experimental results demonstrating the superiority of PCoTTA over existing methods across multiple tasks, including point cloud reconstruction, denoising, and registration. Strengths: [1] Innovative Framework: The introduction of PCoTTA is pioneering in the field of continual test-time adaptation for multi-task point cloud understanding. The framework's design is both practical and realistic, addressing a significant gap in the current state of research. [2] New Benchmark: The creation of a new benchmark for practical continual test-time adaptation in multi-task point cloud understanding is a valuable contribution to the field, facilitating future research and comparison. [3] Experimental Validation: The paper provides extensive experimental results across multiple tasks and domains, demonstrating the effectiveness and superiority of the proposed method. [4] Writing quality: This paper is written and organized well. Weaknesses: [1] Limited Task Variety: While the paper demonstrates the framework's effectiveness, it could benefit from including a broader range of point cloud tasks to further validate its versatility and robustness. E.g.
Traditional domain adaptation task, classification task on ModelNet -> ScanObjectNN [2] Real-World Application: Although the framework is tested on both synthetic and real-world datasets, more discussion on real-world applicability and potential challenges in diverse practical scenarios would strengthen the paper. [3] Efficiency Metrics: The paper primarily focuses on the effectiveness of the framework. It would be beneficial to provide more analysis on throughput/inference speed. [4] Comparison with Broader Techniques: It would be interesting to include more point cloud understanding methods like PointNext, and domain adaptation techniques. Technical Quality: 3 Clarity: 3 Questions for Authors: Refer to the Weakness. I will raise my rating if the weaknesses are addressed. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Refer to the Weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your thorough review and valuable feedback. We are pleased that you recognized the novelty and practical application of our PCoTTA framework for continual test-time adaptation in multi-task point cloud understanding. We appreciate your acknowledgment of our new benchmark aimed at advancing future research. Additionally, we thank you for your confirmation that our experimental validation and clear writing effectively demonstrated the superiority of our method. In the following, we will address each of your concerns: **Q1: Other tasks.** Thanks for your valuable suggestions. Following PIC [11], our PCoTTA is fundamentally designed for regression tasks, making them ‘unified’ with position output (x, y, z) and a single loss. This focus on regression tasks inherently limits its applicability to discrimination tasks such as classification. Honestly, multi-task learning on point clouds is still at an early stage, and our focus does not lie in unifying as many tasks as possible. In future work, we would like to enhance the diversity of tasks by developing models applicable to other tasks like classification. **Q2: Real-world application.** Thanks. Our method shows potential for many real-world applications, e.g., autonomous driving and virtual reality, as indicated by the analysis of its computational efficiency, e.g., number of parameters and running time, in Table D. Since our PCoTTA is an end-to-end test-time adaptation method that does not employ a teacher-student model or pseudo-labeling techniques, it is more efficient and suitable for real-time deployment. However, other tasks like classification are not considered, since we follow PIC's multi-task setting. As future work, we would like to investigate how to enhance the diversity of point cloud understanding tasks within a single framework.
In addition, though our model is efficient (0.06 seconds at inference), we still need to consider the computing power of devices, and may need to further enhance its efficiency by reducing the size of the backbone model. **Q3: Efficiency metrics.** We present an analysis of model parameters and running time in Table D in the rebuttal PDF. The results show that our method can infer target data at a fast speed, and our model has the fewest parameters compared to other CTTA methods. **Q4: More comparisons.** As suggested, we compare our PCoTTA with PointNext. With the same setting, we evaluate it on our new benchmark, where it is trained on all source domains and tested on unseen targets over 3 independent evaluation rounds. Moreover, we reproduced ViDA [1], a specialized method for CTTA. Table A shows our method's superiority. [1] Liu et al. ViDA: Homeostatic visual domain adapter for continual test time adaptation. In ICLR 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. Keep my ratings. Good luck! --- Reply to Comment 1.1.1: Comment: We thank you so much for your positive support. We will definitely revise our paper accordingly based on your valuable comments.
Summary: This paper presents a new point cloud benchmark for Continual Test-Time Adaptation (CTTA) and compiles relevant 3D datasets. Additionally, this paper devises three innovative modules for PCoTTA, including automatic prototype mixture (APM), Gaussian splatted feature shifting (GSFS), and contrastive prototype repulsion (CPR) strategies, to collectively address the issues of catastrophic forgetting and error accumulation in CTTA tasks. Strengths: 1. The point cloud CTTA dataset compiled in this paper is highly significant. 2. The idea of using Automatic Prototype Mixture to avoid catastrophic forgetting is sensible, but it does not align well with the standard CTTA setting, which cannot access source domain data. 3. Good writing ensures that the contributions of the paper are clearly understandable. Weaknesses: 1. My main concern is that the method design violates the basic setting of the CTTA task. The CTTA task stipulates that source domain data cannot be accessed to better simulate real-world applications and ensure data privacy. However, this paper utilizes source prototypes, which are derived from both source domain data and the source model, thus not complying with the regulations. Additionally, this comparison is unfair because if this paper requires the use of source prototypes, then other CTTA methods should also be allowed to use source prototypes. 2. The paper is severely missing related works and the necessary CTTA baselines for comparison, including [a], [b], [c], [d], etc. [a] EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization. [b] ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation [c] Towards stable test-time adaptation in dynamic wild world. [d] Continual-MAE: Adaptive Distribution Masked Autoencoders for Continual Test-Time Adaptation 3. 
Relying solely on T-SNE and the improvement in main experiment accuracy to validate the method's solution to catastrophic forgetting and error accumulation is insufficient. More extensive experiments and theoretical proof are needed to substantiate your claims. 4. The ablation study should include scores for using only the contrastive prototype repulsion or Gaussian splatted feature shifting individually to highlight the significance of each contribution. Additionally, contrastive prototype repulsion should not be considered an independent contribution, as it is a well-established technique. 5. The visualizations in the main text should include comparisons with other CTTA methods. Technical Quality: 2 Clarity: 3 Questions for Authors: If the authors address the issues mentioned in the weaknesses, I am willing to increase the rating. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude for your review. We are pleased that you recognized the significance of our compiled point cloud CTTA dataset and the novelty of our three modules to address catastrophic forgetting and error accumulation. Additionally, we are glad that the clarity of our writing ensured a clear understanding of our paper's contributions. In the following, we will address each of your concerns: **Q1: CTTA setting.** Thanks. We acknowledge that the CTTA task prohibits access to source domain data to ensure real-world applicability and data privacy. Strictly following this protocol, our PCoTTA, which utilizes source prototypes, is carefully designed to ensure a fair comparison. The source prototypes are derived exclusively from the source model, which is pre-trained before any adaptation. Source prototypes leverage the structured knowledge embedded within the source model, aiming to improve the robustness and efficiency of the adaptation without compromising data privacy or violating the CTTA setting. This is analogous to utilizing the parameters (prompts) of a pre-trained source model [9] in CTTA, and is a common practice in CTTA. **Q2: Missing related works.** Following your advice, we will include these papers in the revised version. Since [a], [c], and [d] do not have official open-source code or models available, we were unable to compare our results with them. We reproduced [b] in our setting and evaluated it on our proposed benchmark. The results are shown in Table A, indicating that [b] performs only similarly to CoTTA [37], while our method outperforms it significantly. We attribute this to its focus on domain adaptation without specific designs for multi-task learning, which prevents it from outperforming our method.
**Q3: More experiments to verify catastrophic forgetting and error accumulation.** As for the issue of catastrophic forgetting, we have verified it with both quantitative and qualitative experiments. As shown in Table 1 of the manuscript, our continual test-time adaptation setting is similar to the one in CoTTA [37], where we track the continuous performance across 3 independent evaluation rounds, with samples shuffled randomly in each round. The results in 3 rounds demonstrate the stability, i.e., robust continuous online learning abilities, and resilience against catastrophic forgetting in continually varying target domains. Moreover, we provide T-SNE visualizations for three independent validation rounds in Figure A and task-specific visualizations for each round in Figure B in the rebuttal PDF. Our method remains stable across continuous rounds, demonstrating that our proposed APM and GSFS effectively mitigate catastrophic forgetting by explicitly leveraging constant source prototypes and source domain representations, thereby avoiding over-reliance on adaptively learned information. Finally, our APM pairs and mixes based on the similarity between the source-learnable prototypes and the current target, effectively incorporating all source domain information. As a result, catastrophic forgetting is effectively mitigated by explicitly fusing inherent source representations (prototypes) and applying graph attention mechanisms to target features, thus achieving good stability of our models. As for the issue of error accumulation, we have verified it both theoretically and empirically. Firstly, as evidenced by [38], pseudo labels tend to introduce pseudo-label noise and can lead to error accumulation in long-term adaptation. In contrast, our PCoTTA framework does not use any online or offline pseudo-labeling techniques [9, 37], which inherently avoids the risk of error accumulation.
In Table 1 of the manuscript, the results across 3 continuous rounds also illustrate this effectiveness in avoiding error accumulation. **Q4: More ablations and clarification of CPR.** Thanks. As per your constructive advice, we present additional ablation studies in Table B, evaluating the use of CPR and GSFS individually. These results clearly demonstrate the incremental benefits of each component and their combined effect on improving performance. While contrastive learning is a well-established technique, our CPR leverages domain prototypes (the new knowledge), introducing a novel aspect to the method. Traditional contrastive learning focuses on instance-level representations, whereas our approach innovates in its use of domain-level learnable prototype interactions. Equipped with the proposed CPR and learnable prototypes, our method provides structured and informative representations of various target domains to guide the adaptation process. We will revise this in the new version. **Q5: Visualization comparisons with other CTTA methods.** Thanks. We have provided the visual results of other CTTA methods in Appendix A.3. Our PCoTTA excels in producing high-quality predictions across multiple tasks, even as the target domain changes. This is due to our three innovative modules, which minimize discrepancies between source and target domains, enhancing overall prediction quality. We will include these visualizations in the revised paper. --- Rebuttal 2: Title: Remaining Concerns Comment: I appreciate the author's response and the additional experiments, but I still have some unresolved concerns. 1) CTTA setting problem: First, I am very clear about the description of the CTTA setting in papers [9] and [37]. In CTTA, it is permissible to use source domain pre-trained model parameters. So, I would like to ask if the source prototypes you used are the features/tokens extracted by the model or the model's parameters/token-wise visual prompt.
As you described in Line 158, “we save all source prototypes Zs derived from the model at the last epoch,” so you not only used the model parameters but also the source prototypes. In the CTTA setting, only the pre-trained source model is allowed to be accessed; you cannot access any part of the source model's training process, as this would still involve interacting with the source data. How did you obtain the source prototypes? For example, if the source model is already deployed on an end-device, how do you obtain the source prototypes? Existing source-free TTA and source-free CTTA methods [35, 37, a, b] do not access any source domain features/tokens. Therefore, I suggest that the method proposed in this paper should not be compared with source-free methods. 2) Paper [a] has official open-source code available at https://github.com/Lily-Le/EcoTTA. 3) What are the details of the reproduction of paper [b] in Table A? Why is it performing worse than the source model? If my remaining concerns can be addressed, I am willing to improve my rating. --- Rebuttal 3: Comment: Thanks for your comments. We would like to further address your concerns as follows: **Q1:** Sorry for causing confusion. The source prototypes we used are indeed the tokens extracted by the model during the source pretraining stage, pre-computed and pre-saved on the device for test-time adaptation. We would like to stress that this does not violate the fairness of the CTTA setting. In fact, many recent papers on TTA/CTTA [I, II, III] reveal that the use of source prototypes such as features, tokens, and statistics from the source domain does not pose privacy issues and can be utilized to further enhance adaptability on target data. Firstly, TTAC [I] calculates category-wise and global statistics of source domains as anchors and pre-saves them for streaming online test-time adaptation.
Similarly, the published work [II] generates class-wise source prototypes before model deployment and presents an auxiliary task based on nearest source prototypes to align the source and target features. Besides, TPS [III] computes per-class prototypes of the source domains, enabling the prototypes to be cached and reused for all subsequent test-time predictions. Since our work is inspired by and follows them, we strictly adopt the same setting. In line with [I, II, III], our source prototypes are pre-cached before deployment and stored as constant parameters (token-like vectors) alongside the pre-trained source model. After deployment, our method does not access the source data, ensuring no interaction with the source data is involved in the test-time adaptation stage. Different from them, our method focuses on domain-level prototypes instead of class-level prototypes, avoiding pseudo labeling of categories and instead highlighting the inherent features of the entire domain. As per your constructive advice, we will carefully re-clarify this issue in the revised manuscript to eliminate misunderstandings. [I] Su et al. Revisiting realistic test-time training: Sequential inference and adaptation by anchored clustering. In NeurIPS 2022. [II] Choi et al. Improving test-time adaptation via shift-agnostic weight regularization and nearest source prototypes. In ECCV 2022. [III] Sui et al. Just Shift It: Test-Time Prototype Shifting for Zero-Shot Generalization with Vision-Language Models. Preprint on ArXiv 2024. **Q2:** Thank you for the information; we have also noticed this link. However, this is a community implementation rather than the official code. Additionally, according to the issue at https://github.com/Lily-Le/EcoTTA/issues/1#issuecomment-1667177037, the repository's author noted that the replicated results were unsatisfactory. Nonetheless, we still tested it on our new benchmark and obtained sub-optimal results.
Therefore, we chose not to include these results in the rebuttal to avoid any biased comparisons. We have cited this inspiring work in the revision and will endeavor to reproduce EcoTTA for comparisons in the future. **Q3:** Apologies for the confusion. Firstly, PointNext in Table A is another point cloud understanding method that we reproduced under the multi-task setting, as suggested by Reviewer ynhY for additional comparisons. It is not the source model for ViDA [b]. Since we follow the same settings as CoTTA [37] for reproduction, we also use PIC [9] as the source model for ViDA, thus equipping ViDA with multi-tasking and multi-domain learning capabilities. As such, all these CTTA methods start from the same source pre-trained model, ensuring fair comparisons. From the table, we can observe that ViDA indeed achieves better results than PIC, but it does not outperform our method. The reasons for the superiority of our PCoTTA over ViDA lie in several aspects. Firstly, ViDA employs a teacher-student framework and uses a consistency loss where the teacher model's predictions serve as the pseudo labels for the student model. However, as evidenced by [38], pseudo labels tend to introduce pseudo-label noise and can lead to error accumulation in long-term adaptation. In contrast, our PCoTTA framework does not use any online or offline pseudo-labeling techniques [9, 37], which inherently avoids the risk of error accumulation. Secondly, ViDA is specifically designed for CTTA on 2D images and cannot tackle CTTA on 3D data well. This is because, compared to grid-structured 2D images, 3D point clouds are unordered and high-dimensional, which is more challenging for CTTA. To address this, we specifically designed a graph attention mechanism to fully learn token sequences within a complete graph structure. This effectively captures contextual relationships and semantic features between unordered and irregular point cloud patches.
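For reference, the Chamfer Distance (CD) loss that our point cloud understanding tasks optimize (and which is later used as the consistency objective in the ViDA reproduction) can be sketched in its standard symmetric form. This is a common textbook formulation with assumed array shapes, not the exact reproduction code:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3):
    mean nearest-neighbor squared distance in both directions."""
    # Pairwise squared distances between every point in p and every point in q.
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)   # (N, M)
    # p -> q nearest neighbors plus q -> p nearest neighbors, each averaged.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

In a teacher-student setup, the CD between the teacher's and student's predicted point clouds can serve as a consistency objective in place of the cross-entropy used for classification, since the outputs are coordinates rather than class probabilities.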
--- Rebuttal Comment 3.1: Title: Remaining Concerns Comment: Thank you for your response. 1) The paper "Continual Test-Time Domain Adaptation [37]" was the first to set up the CTTA problem, and subsequent works have followed its setup. In Table 1 of [37], it is clearly stated that the CTTA setting involves "No Source" and "No Train stage." Therefore, I believe that in the correct CTTA setting, the source domain model's training process should not be accessed. Additionally, once the source prototypes are obtained, is it possible to infer information about the source domain data? Therefore, I suggest comparing this paper with non-source-free TTA/CTTA methods. BTW, you could directly compare it with papers that use source features, as the baselines compared in this paper do not access source features. 2) The reproduction details for paper [b] are still unclear. For example, where is the adapter injected? How is uncertainty obtained from the 3D data? --- Reply to Comment 3.1.1: Title: Responses to your concerns Comment: **Q1:** Thank you! In the CTTA setting of [37] (Table 1), "No Source" means no source data (x, y) is accessible during the test-time adaptation stage; for 3D point cloud tasks, such data means point cloud inputs with coordinates (x, y, z), which are prohibited from use in the adaptation stage. As a matter of fact, it does not restrict the use of prototype features extracted by the source pre-trained model. We would like to state that our setting is fair since we do not use source data, and we follow the common practice of [I, II, III] in which source prototypes are pre-cached before deployment and stored as constant parameters alongside the pre-trained source model, and then used for test-time adaptation. In general, the pre-cached source prototypes are derived from the source pre-trained model and can be regarded as part of the source pre-trained model.
Please note that [I, II, III] are three examples of this common practice, and many more have emerged in recent years. We simply wish to convey that this is a well-recognized practice in the community. On the other hand, our proposed PCoTTA method still shows clear superiority over the original CoTTA [37] even without the use of source prototypes. Take the point cloud reconstruction task as an example: as shown by model B (without source prototypes) in Table B of the rebuttal PDF and CoTTA's results in Table 1 of the manuscript, CoTTA [37] achieves 58.3, 56.7, and 55.2 over the 3 rounds, while our PCoTTA achieves 36.8, 36.2, and 35.7, demonstrating state-of-the-art performance in continual test-time adaptation for 3D point clouds. Thank you very much, and we hope the above can address your concerns. **Q2:** Sorry for the confusion. We reproduce ViDA [b] for 3D point cloud understanding, strictly following its original setup. High-rank and low-rank ViDAs are injected into all layers of the source model (i.e., the PIC pre-trained model) and scaled using scale factors. Since ViDA is a 2D image method, we apply typical 3D data augmentations such as rotation and scaling for training the teacher-student model. As in ViDA, we calculate uncertainty values and scale factors using the mean and variance of model outputs over several augmentations. However, instead of using predicted probabilities as in classification tasks, we employ the position offsets of the outputs for our point cloud understanding tasks. Furthermore, since the Chamfer Distance (CD) loss is used in our point cloud understanding tasks (essentially regression tasks), we optimize the teacher-student model using the CD loss as the consistency loss instead of the cross-entropy loss typically used in classification. We will add these implementation details in the revision. --- Rebuttal 4: Comment: There are serious problems with the reproduction of SANTA. 
Your baseline model is PIC, which is a transformer-based model, and each transformer block includes layer normalization. However, in the reproduction details, you claim, "we only update the BatchNorm layer parameters in the source model during adaptation." This reproduction and the associated experiments have significant flaws, and I also have concerns about the reproduction results for the comparison method TENT [35]. --- Rebuttal 5: Comment: Thanks for your comment. We believe there may be a misunderstanding here. We made sure to use PIC [8] as the source model for all reproduced methods. In PIC, the model comprises the token embedding module (Encoder) and Transformer blocks, where the Encoder contains **BatchNorm** layers (shown below). Following the original works SANTA [V] and TENT [35], their BatchNorm layers are updated. This ensured fairness in reproduction. Please see the network details below. We hope this clarifies your concerns. 
```
…
(MAE_encoder): MaskTransformer(
  (encoder): Encoder(
    (first_conv): Sequential(
      (0): Conv1d(3, 128, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv1d(128, 256, kernel_size=(1,), stride=(1,))
    )
    (second_conv): Sequential(
      (0): Conv1d(512, 512, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv1d(512, 384, kernel_size=(1,), stride=(1,))
    )
  )
  (blocks): TransformerEncoder(
    (blocks): ModuleList(
      (0): Block(
        (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
        (drop_path): Identity()
        (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
        (mlp): Mlp(
          (fc1): Linear(in_features=384, out_features=1536, bias=True)
          (act): GELU(approximate=none)
          (fc2): Linear(in_features=1536, out_features=384, bias=True)
          (drop): Dropout(p=0.0, inplace=False)
        )
        (attn): Attention(
          (qkv): Linear(in_features=384, out_features=1152,
            bias=False)
          (attn_drop): Dropout(p=0.0, inplace=False)
          (proj): Linear(in_features=384, out_features=384, bias=True)
          (proj_drop): Dropout(p=0.0, inplace=False)
        )
      )
…
```
--- Rebuttal Comment 5.1: Comment: I do not have any misunderstanding about the reproduction. I checked the official code of PIC, and the model only has two Batch Normalization layers, which are in front of the transformer encoder and the transformer decoder. The BN parameters are minimal. Following previous transformer-based CTTA methods, I believe that for the reproduction of SANTA and TENT, the Layer Normalization layers should be updated. Therefore, I think the reproduction and experiment are incorrect and unfair. --- Rebuttal 6: Comment: Thanks for the further comment. As per your comment, we further update the LayerNorm parameters for SANTA [V], TENT [35], and AdaBN [18]. AdaBN [18] was compared in our paper and involves BatchNorm updates. The results are shown in the table below, indicating that updating LayerNorm brings only a slight improvement, and our method still clearly outperforms them. Our method does not update the source model (including the Transformer blocks). We will discuss the above in the revision.

| Rounds | 1 | | 2 | | 3 | |
| :-------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| Target Domains | ModelNet40 | ScanObjectNN | ModelNet40 | ScanObjectNN | ModelNet40 | ScanObjectNN |
| Methods | Rec./Den./Reg. | Rec./Den./Reg. | Rec./Den./Reg. | Rec./Den./Reg. | Rec./Den./Reg. | Rec./Den./Reg. |
| AdaBN + LN [18] | 58.7/52.1/37.7 | 64.1/76.8/57.2 | 58.9/51.5/37.2 | 64.1/74.2/53.9 | 56.8/50.3/35.5 | 62.1/71.7/51.1 |
| TENT + LN [35] | 57.9/50.6/36.8 | 64.8/76.4/55.0 | 57.8/50.0/36.7 | 64.7/73.5/51.1 | 55.2/48.4/35.0 | 62.1/69.2/49.7 |
| SANTA + LN [V] | 32.3/42.1/37.8 | 44.9/55.2/38.6 | 31.7/41.9/37.4 | 42.0/53.4/35.6 | 30.1/41.6/36.4 | 40.6/52.9/34.7 |
| Ours | 6.3/21.4/15.4 | 8.9/28.3/20.7 | 5.5/19.9/14.6 | 8.5/26.9/19.6 | 5.4/18.6/12.1 | 8.2/25.2/19.3 |

[V] Chakrabarty et al. SANTA: Source Anchoring Network and Target Alignment for Continual Test Time Adaptation. In TMLR 2023. [18] Li et al. Revisiting Batch Normalization for Practical Domain Adaptation. arXiv preprint, 2016. [35] Wang et al. Tent: Fully Test-Time Adaptation by Entropy Minimization. In ICLR 2021. --- Rebuttal Comment 6.1: Comment: Thank you for your detailed responses. However, I still believe that the most accurate CTTA setting should not rely on any source information beyond the source model itself, including source features and tokens. This is because CTTA simulates the continual adaptation process post-deployment on the edge, where there is no opportunity to access the source model's training phase or store source features. However, this reflects only my personal view, shared by some other researchers, and does not represent the views of all researchers. Additionally, the description of the baseline methods' reproduction in the paper is not entirely accurate or complete, and I hope the authors can address these issues. Finally, I appreciate the authors' efforts in addressing most of my concerns and conducting numerous additional experiments, and I will raise my rating to "Borderline Accept." --- Reply to Comment 6.1.1: Comment: We appreciate your time in reviewing our work and making our paper more comprehensive. Thank you for your support and for raising your rating. We will consider those points and revise our paper accordingly in the new version.
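The BatchNorm-vs-LayerNorm question in this thread comes down to which normalization parameters are unfrozen during test-time adaptation. A minimal PyTorch sketch of that selection step, in the spirit of TENT-style adaptation, might look as follows; `collect_norm_params` is a hypothetical helper written for illustration, not code from any of the papers discussed:

```python
import torch.nn as nn

def collect_norm_params(model, include_layernorm=True):
    """Freeze all parameters, then re-enable gradients only for the affine
    parameters of normalization layers. `include_layernorm` toggles between
    the BatchNorm-only and BatchNorm+LayerNorm update variants."""
    for p in model.parameters():
        p.requires_grad_(False)
    norm_types = [nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d]
    if include_layernorm:
        norm_types.append(nn.LayerNorm)
    params = []
    for m in model.modules():
        if isinstance(m, tuple(norm_types)):
            for p in m.parameters():  # affine weight (gamma) and bias (beta)
                p.requires_grad_(True)
                params.append(p)
    return params
```

The returned list would then be handed to an optimizer, e.g. `torch.optim.Adam(params, lr=1e-4)`, while all other weights stay frozen.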
Rebuttal 1: Rebuttal: We would like to thank the AC and all reviewers for their efforts and time in reviewing our paper. We appreciate their constructive and valuable comments. We are pleased to see reviewers’ acknowledgement of the significance of our compiled point cloud CTTA dataset or new benchmark (Reviewer SkYG, Reviewer ynhY, Reviewer 5Vdz), the novelty of the method (Reviewer SkYG, Reviewer ynhY, Reviewer ni1v, Reviewer 5Vdz), good writing/organization (Reviewer SkYG, Reviewer ynhY, Reviewer ni1v), and significantly superior performance (Reviewer ynhY, Reviewer ni1v, Reviewer 5Vdz). For each reviewer, we have separately submitted a rebuttal accordingly. We addressed all concerns from each reviewer there. We will update our paper accordingly. Pdf: /pdf/9a906f09d3c8cda10b5b8753648473bab713832d.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Offline Multitask Representation Learning for Reinforcement Learning
Accept (poster)
Summary: This paper investigates how representation learning in offline multitask low-rank RL can improve sample complexity when using the learned representations in downstream reward-free RL, offline and online RL settings. The paper assumes that the new task shares the same representation as the upstream tasks making up the offline data set. The paper provides theoretical results but does not provide any empirical evaluation. Rather intuitively, their result shows that learning a representation from the offline multitask data set can improve sample complexity in new downstream tasks when compared to learning the new downstream task from scratch. Strengths: Representation learning and large-scale offline data sets have shown much promise for learning policies. As such, investigating the theory of learning representations from multi-task offline data seems an important research direction. I found the related work section very beneficial for pulling different pieces of the paper together. This really helped me understand things. Weaknesses: I should start off by saying that I have little experience with the theoretical aspects of representation learning for RL and there were many aspects of this paper I had to take as a given without fully understanding. I have put a confidence of 2 to reflect this. Whilst this paper addresses an important and interesting problem, I did find it hard to follow. This is likely due to the fact I do not research theoretical aspects of RL. Nevertheless, I do think the paper should be digestible to a wider audience than it currently is. In particular, the authors could do a better job of communicating intuition for the different equations that are introduced. I will now detail some suggestions with the aim of helping the authors communicate their work more clearly. At the end of Section 3, it was not clear to me what the main result is. I would advise the authors to summarise the main result at the end of the section. 
This would really help make this paper an easier read. Similarly in Section 4. What is the main takeaway from this section? Equation 3.4 is introduced rather abruptly. That is, I did not understand its purpose until later in the paper when the authors stated that they wanted to improve sample complexity by improving exploration. I would advise the authors to state this reasoning when they introduce Equation 3.4. I am confused by the notation for $h \in [H]$, $t \in [T]$ and $i \in [N]$. Aren't these discrete numbers? If so, the authors should use curly braces instead, e.g. $h \in \\{0,\ldots,H\\}$ ## Minor comments and typos - Line 61 - $H$ and $d$ aren't defined yet so this doesn't make much sense. I found the contributions had way too many technical details which does not make any sense to the reader given what they've read so far. Consider shortening these and removing unnecessary technical details. - Equation 2.1 - What is $\mathbb{P}$? This is not defined until way later in the paper. - Line 42 - "We develop new" should be "We develop a new" - Line 69 - MDPs are discrete time by definition so it is confusing to state you consider discrete-time MDPs. It almost implied MDPs are not usually discrete time. - Sections shouldn't lead straight into sub-sections, e.g. Section 2 to 2.1 and Section 4 to 4.1 - You should add a few sentences explaining what you are introducing in this section. - Line 137 - "was" should be "were" - Line 152 - "we construct lower confidence" should be "we construct a lower confidence" - Line 159 - "can" should be "has" - Line 177 - "compared to concentrability coefficient" should be "compared to the concentrability coefficient" - Line 215 - "sequel" is the wrong word here. It sounds like you mean the next paper... - Line 233 - "task" should be "tasks" - Line 233 - "assume" should be "assumes" - Line 233 - "for reward-free" should be "for the reward-free" - Table 1 - It is not clear from the caption that this is for the reward-free setting. 
This should be made clear in the caption. - Line 299 - "such" should be "this" - Line 343 - "study the multitask RL" should be "study multitask RL" - Line 347 - "setting" should be "settings" Technical Quality: 3 Clarity: 2 Questions for Authors: - Why is $\phi$ shared and $\mu$ is not? This should be made clear in the text. - What is $\lambda$ in Equation 3.3? - Why are equations not referenced as Equation X but instead as (X)? You could be referring to an equation/section etc... Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: I do not think the limitations are adequately discussed. The proofs utilise a lot of assumptions and the paper would benefit from a clear paragraph demonstrating the limitations associated with these assumptions. What are the practical implications of these assumptions? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time and effort in providing detailed feedback on our work. We hope our response will fully address all of the reviewer's points. Thank you so much for your suggestion on how to improve the readability for a broader audience. We will definitely add more intuition for each of the different equations presented in the main paper. ### Main takeaway for Section 3 The main takeaway of the main result (Theorem 3.3) in Section 3 is highlighted in L187-193. Namely, in Theorem 3.3, Equation (3.6) provides the statistical rate for the average accuracy of the estimated transition kernels of the $T$ MDPs. Equation (3.7) implies that if the optimal policy $\pi_t^*$ is covered by the offline data for all $t \in$ {1, 2, ..., T}, as characterized by the multitask relative condition number $C^*$, then the output policy of MORL is able to compete against it on average. ### Main takeaway for Section 4 At a high level, the main takeaway of Section 4 is that representation learning on the upstream multitask offline dataset helps improve the sample complexity, the statistical rate of the sub-optimality gap, and the regret bound in reward-free, offline, and online RL, respectively. The main takeaway for Theorem 4.4 is provided in Remark 4.5. Due to space constraints in the main paper, the main takeaway for Theorem 4.7 is provided in Remark K.7 (L1042-L1046) in Appendix K. Similarly, the main takeaway for Theorem 4.8 is provided in Remark M.6 (L1140-L1145) in Appendix M. ### Other comments Thanks for suggesting that we state the purpose of Equation 3.4 earlier. We will do so in our next version. Essentially, Equation 3.4 is used for inducing pessimism in the learned policy in Algorithm 1. The notation $[N]$ indicates the set {1, 2, ..., N}. Even though it is a standard notation, we will clarify it in our next version. Thanks for the other minor comments and for pointing out some typos. 
We will correct and update them. ### Questions 1. By assuming $\mu$ is not shared across tasks, we ensure that the transition kernel of each task is different from one another. However, for theoretical analysis it is imperative to make an assumption that will connect all the MDP tasks in some manner. This is why we assume that the feature $\phi$ is shared across tasks. 2. Here $\lambda$ is a regularization factor in Equation 3.3. 3. Thanks for your suggestion regarding equations as Equation X instead of as (X). We will update this in our next version as per your suggestion. ### Limitations We have already discussed the assumptions made at length in L230-L241. We will add further discussion on how these assumptions can be limiting. We hope we have addressed all of your questions/concerns. If you have any further questions, we would be more than happy to answer them and if you don’t, would you kindly consider increasing your score? --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. After reading the rebuttal, the other reviews, and the other rebuttal comments, I have decided to maintain my score of 5. There seems to be broad agreement amongst reviewers that this paper lacks intuition and is hard to follow. I suggest the authors take on board the reviews and rework their paper. With that said, I do think this paper has potential and I am happy for it to be published if all other reviewers believe it should. However, in its current form, I do not feel like I can give a score higher than 5. Thank you for explaining where the limitations are discussed. As both Reviewer **Qe4K** and myself overlooked these limitations, I suggest you help the reader find them, for example, name the paragraph with \paragraph{Limitations}. I'd also note that it's pretty common practice to discuss the limitations of your work in the discussion or conclusion with a clearly labelled paragraph. 
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for their suggestions on how to make the discussed limitation more easily findable. We will incorporate these suggestions in our next version. Once again thank you so much for your time in reviewing the paper and providing meaningful suggestions on how to improve the paper.
Summary: This paper introduces the Multitask Offline Representation Learning (MORL) algorithm, which aims to enhance sample efficiency in offline multitask reinforcement learning (RL). By learning a shared representation from pre-collected datasets of different tasks, modeled by low-rank Markov Decision Processes (MDPs), the authors demonstrate the theoretical benefits of this method for various downstream RL tasks, including reward-free, offline, and online scenarios. The paper provides extensive theoretical analysis and introduces specific data-coverage assumptions, presenting the first theoretical results showing the advantages of representation learning in offline multitask RL. Strengths: 1. The paper addresses the novel problem of offline multitask representation learning in RL with a new algorithm (MORL) that incorporates innovative techniques such as joint model learning via MLE and penalty functions for pessimism. The approach is well-grounded in theory and fills a relevant gap in the literature. 2. The submission is technically sound with rigorous theoretical analysis supporting the proposed algorithm. Detailed proofs of the main theoretical results and thorough explanations of the algorithm's components are provided. 3. The paper is clearly written and well-organized, with a logical progression from problem introduction to algorithm presentation and theoretical results. The definitions and assumptions are clearly stated, and the proofs are detailed and comprehensible. 4. The results are important as they provide the first theoretical demonstration of the benefits of multitask representation learning in offline RL. The approach has potential applications in domains where collecting data online is infeasible, making it highly relevant for real-world problems. Weaknesses: 1. The empirical validation through experiments is limited, lacking comparisons with other state-of-the-art algorithms to support the theoretical claims. 2. 
The theoretical proofs need additional explanations or visual aids, with a high-level overview before detailed proofs to improve accessibility for a broader audience. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the performance of MORL scale with the number of tasks and the size of the state and action spaces? 2. Are there any practical considerations or limitations in applying MORL to real-world datasets that were not covered in the paper? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations of their work, particularly in terms of the assumptions required for the theoretical analysis. However, more discussion on the potential practical limitations and how they can be mitigated would be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time and effort in providing detailed feedback on our work. We hope our response will fully address all of the reviewer's points. * As discussed in Additional Related Work in Appendix A under **Offline Data Sharing in RL**, there have been numerous empirical works that studied the benefit of using offline datasets from multiple tasks to accelerate downstream learning. The aim of this paper is to provide a framework for investigating this approach from a theoretical lens. While empirical validation is limited in this paper, we hope that the theoretical analysis presented in this work will provide insights into algorithmic design principles that can be used to design empirically better-performing algorithms compared to current state-of-the-art algorithms. * Thanks a lot for your suggestion regarding improving the accessibility of the proofs for a broader audience. We will add a proof roadmap section along with a visual proof roadmap in our next version. Again, thanks a lot for the great suggestion! ### Questions 1. As can be observed from Equation 3.7, the sub-optimality gap (in the average sense) of MORL does not depend on the size of the state and action spaces. Instead, it depends on $d$, the dimension of the feature $\phi$. Moreover, the average accuracy of the estimated transition kernels of the $T$ MDPs scales at the rate $\tilde{O}(\sqrt{\frac{\log |\Phi|}{nT} + \frac{\log|\Psi|}{n}})$. 2. In our setting, we assumed that the different tasks share the feature representation $\phi$ under the low-rank MDP structure. However, in most real-world datasets this might be difficult to satisfy exactly. It would be interesting future work to study how to extend this work to the case where the features are not exactly shared but only shared up to some perturbation. 
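As a toy numerical illustration of the rate quoted above (ignoring logarithmic factors and constants; the plugged-in values of $n$, $T$, $\log|\Phi|$, and $\log|\Psi|$ are arbitrary), one can check that the representation-learning term $\log|\Phi|/(nT)$ shrinks as the number of upstream tasks $T$ grows, while the rate is floored by the task-specific term $\sqrt{\log|\Psi|/n}$:

```python
import math

def morl_rate(n, T, log_phi, log_psi):
    # sqrt(log|Phi| / (n*T) + log|Psi| / n), up to constants and log factors
    return math.sqrt(log_phi / (n * T) + log_psi / n)

# More upstream tasks tighten the representation-learning term...
r1 = morl_rate(n=100, T=1, log_phi=50.0, log_psi=5.0)
r10 = morl_rate(n=100, T=10, log_phi=50.0, log_psi=5.0)
# ...but the rate never drops below the task-specific floor sqrt(log|Psi| / n).
floor = math.sqrt(5.0 / 100)
```

This is the quantitative sense in which multitask data helps: $T$ divides only the term tied to learning the shared representation class $\Phi$.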
Finally, we are glad that the reviewer felt that we have adequately addressed the limitations of the work in terms of the assumptions used for the theoretical analysis. As per the reviewer's suggestion, we will add more discussion on the potential practical limitations of MORL, along with how to possibly mitigate them, in our next version. We hope we have addressed all of your questions/concerns. If you have any further questions, we would be more than happy to answer them, and if you don't, would you kindly consider increasing your score? --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer fKn9 Comment: Thank you for your detailed rebuttal and for addressing the points I raised in my review. I appreciate your clarification on the scalability of the MORL algorithm, particularly that the sub-optimality gap depends on the feature dimension rather than the state and action space sizes. I have also reviewed the other feedback and acknowledge the broader concerns about the paper's clarity. Your plan to include a proof roadmap and visual aids in the revised version is a positive step that will improve readability. I believe your responses have effectively addressed my concerns, and while I will maintain my current score, I look forward to the improvements in the final version. --- Reply to Comment 1.1.1: Title: Reply to reviewer fKn9 Comment: Dear reviewer fKn9, Thank you for spending time checking our rebuttal. We promise to incorporate the suggestions you made (a proof roadmap and visual aids) in the final version, and we are still available here to answer your questions until tomorrow in case you have any final questions. Best, Authors
Summary: This paper provides a theoretical analysis of representation learning in Multi-Task Offline RL. Specifically, they consider a setting in which the transition kernels have a low-rank decomposition $P(s'|s,a)=\langle \phi(s'),\mu(s,a)\rangle$, such that $\mu(s,a)$ depends on the task but $\phi(s')$ is shared by all tasks. The authors propose an algorithm for this setting that first learns a transition model and then does planning on the model with a low-confidence-penalized reward. They provide a suboptimality bound, based on bounding the TV distance between estimated and true transition kernels. The authors also investigate the application of this acquired representation in down-stream online, offline and reward-free tasks. Strengths: * The paper is very well written; assumptions that are made are explained and well justified. * The exploration of down-stream applications of Multi-Task Offline RL is of large practical interest. Weaknesses: **I don't work in theory, please see the below more as minor comments.** * [89] analyzes a very similar setting but is only mentioned in the appendix as concurrent work. Strictly speaking, [89] was uploaded to arXiv three months before the deadline, so it is not considered concurrent as per NeurIPS guidelines. Either way, it should probably be discussed more prominently and not just in the Appendix. * It would have been nice to specify which improvements in the bound are results of which decisions. For example, a conservative reward $r-b$ in planning vs the normal reward $r$, or how exactly using the low-rank assumption rather than a linearity assumption affects the analysis. Similarly, it would be nice to provide an interpretation for the different terms in (3.7), but I'm also not sure if that is really possible to do in a meaningful way. * The paper treats a setting with non-stationary reward and transition functions. It is not discussed why the paper treats this setting and what effect it has on the analysis. 
* The approach name and acronym MORL (Multitask Offline Representation Learning) is a somewhat confusing choice, as MORL is frequently used to denote Multi-Objective RL. * The notation is somewhat confusing at times with $t$ representing tasks, $h$ representing timesteps in the MDP, $\tau$ indexing transitions in the downstream offline dataset and $i$ indexing samples in the upstream dataset. The usage of non-stationary policies, rewards, and transition functions also does not help to reduce subscripts. * The checklist instruction part was meant to be removed before submission, i.e. the instructions in L1253,1254 Technical Quality: 3 Clarity: 3 Questions for Authors: * Why was a nonstationary setting analyzed, and how did this affect the analysis? Is it possible to achieve a better regret by considering a stationary reward and transition function? Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Assumptions are discussed sufficiently, so I have no concerns about additional limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
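The low-rank decomposition discussed in this review can be made concrete with a small simulation. The sketch below uses toy sizes and random features chosen for illustration; the row normalization is a simplification, since in an actual low-rank MDP the factors are chosen so that the kernel is already a probability measure. It builds a transition matrix $P(s'|s,a)=\langle \phi(s'), \mu(s,a)\rangle$ of rank at most $d$, in which $\mu$ would be task-specific while $\phi$ is shared across tasks:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d = 6, 3, 2            # toy numbers of states, actions, feature dimensions

phi = rng.random((S, d))     # shared next-state features phi(s')
mu = rng.random((S * A, d))  # task-specific factors mu(s, a), one row per (s, a) pair

P = mu @ phi.T               # P[(s, a), s'] = <mu(s, a), phi(s')>, rank <= d
P /= P.sum(axis=1, keepdims=True)  # row-normalize into transition distributions
```

Scaling each row by a positive constant cannot increase the rank, so the normalized kernel still has rank at most $d$, which is the structure the algorithm's guarantees exploit.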
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time and effort in providing detailed feedback on our work. We hope our response will fully address all of the reviewer's points. * Thanks for your suggestion regarding [89]. A preprint version of our results appeared publicly at around the same time as the preprint of [89]. We will provide a more detailed discussion of it and move the discussion from the Appendix to the main paper. * A conservative reward $r-b$ allows us to show near-pessimism in the average sense, as depicted in Lemma 3.5. This is possible through a concentration argument for the penalty term $\hat{b}_h^{(t)}$ as defined in Equation (3.4). Under the linearity assumption, as in linear MDPs, the representation $\phi^*$ is assumed to be known a priori. This allows one to use linear regression analysis to derive a point-wise model uncertainty quantification, which can serve as a conservative penalty. However, this is not the case under the low-rank assumption. It would be interesting to see how we can interpret each term in Equation 3.7, but as the reviewer mentioned, it is unclear how to do so in a meaningful way. Our motivation for presenting Equation 3.7 in this way was to provide a fine-grained view of each term. * We considered non-stationary reward and transition functions in our work, as this is standard practice in the literature on finite-horizon episodic MDPs with either low-rank or linear MDP structure. Without non-stationarity, the dependency on $H$ would change in the resulting bounds. Indeed, with stationary reward and transition functions one can achieve a better regret bound in terms of the $H$ dependency. * Thanks for pointing out the potential confusion with the acronym MORL. We will try to come up with an alternative acronym in the next version. 
* We totally understand that the notation can be challenging to follow at times, as there are many moving parts that add up in the form of subscripts and superscripts. We tried our best to keep things as concise and clear as possible. We would be happy to incorporate any suggestions the reviewer might have to make the notation easier to follow. We hope we have addressed all of your questions/concerns. If you have any further questions, we would be more than happy to answer them, and if you don't, would you kindly consider increasing your score? --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their additional explanation, especially of the assumptions made in the problem setting and their consequences. I will raise my score from 6 to 7, as this paper seems useful and novel to me. However, I am keeping my confidence at 1, as I am unfortunately not very familiar with this area. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We thank the reviewer for their positive review and for increasing the score.
Summary: This paper proposes a representation learning algorithm for offline multitask reinforcement learning. The proposed algorithm, MORL, is designed for offline multitask RL in low-rank MDPs. The learned representation is examined in downstream RL in reward-free, offline, and online scenarios. Strengths: 1. Detailed theoretical analysis of the proposed algorithm. 1. Better sample complexity than existing algorithms. Weaknesses: I do not conduct research in theoretical offline RL, so I cannot provide an accurate evaluation for this paper. I will try to provide some comments based on my understanding. 1. This paper lacks a comparison with other multitask RL algorithms. Table 1 only compares it with single-task methods. Section 4.3 does not provide any comparison with other algorithms. 1. The contributions to downstream online and offline RL are not clear. Minor issues: 1. Equation (3.2) should be $P(s'|s,a)$... where $s'$ is missing. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In line 295 in Section 4.3, why do you assume that reward function $r^{T+1}$ is unknown? 1. Could you provide any experimental results to support the claims in the paper? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper lacks a limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time and effort in providing detailed feedback on our work. We hope our response will fully address all of the reviewer's points. ### Points raised as weaknesses 1. To our knowledge, there is only one other concurrent offline multitask RL theory work [89]. We provided a detailed comparison with that work in Appendix A: Additional Related Work. However, as described in our discussion, the theoretical results of [89] are not directly comparable to ours. Table 1 only compares with single-task methods, as there is no existing work that studies reward-free RL in multi-task settings. The goal of Table 1 is to show that performing offline multitask representation learning in the upstream task can help improve the sample complexity of downstream reward-free tasks compared to single-task counterparts. For the offline and online downstream tasks, we provided comparisons with other algorithms in Remark K.7 (Appendix K.2) and Remark M.6 (Appendix M.2). We provided them in the appendix due to space constraints in the main paper. 2. As we mentioned in the Introduction section where we highlighted our contributions, for the downstream part our core contribution is in the reward-free setting. We provided results for the offline and online downstream settings as complementary results. ### Questions 1. In Section 4.3, for downstream offline and online RL tasks, we assume the reward function $r^{T+1}$ to be unknown. We made this assumption to make our result more general. Note that knowing the reward function $r^{T+1}$ would make the downstream task easier to solve. 2. As discussed in Additional Related Work in Appendix A under **Offline Data Sharing in RL**, there have been numerous empirical works that studied the benefit of using offline datasets from multiple tasks to accelerate downstream learning. The aim of this paper is to provide a framework for investigating this approach from a theoretical lens. 
While empirical validation is limited in this paper, we hope that the theoretical analysis presented in this work will provide insights into algorithmic design principles that can be used to design empirically better-performing algorithms compared to current state-of-the-art algorithms. ### Limitations As all other reviewers have acknowledged, we have addressed the limitations of our work, especially through a thorough discussion of the assumptions made for the theoretical analysis. Moreover, as per the suggestion of Reviewer fKn9, we will add more discussion on potential practical limitations of MORL along with how to possibly mitigate them in our next version. We hope we have addressed all of your questions/concerns. If you have any further questions, we would be more than happy to answer them; if you don’t, would you kindly consider increasing your score? --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. My concerns are addressed in the rebuttal. I have raised my score to 5 accordingly. --- Reply to Comment 1.1.1: Title: Reply to reviewer Qe4K Comment: Dear reviewer Qe4K, Thank you for checking our rebuttal. We are still available here until tomorrow in case you have any last-minute questions. Best, Authors
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DiffGS: Functional Gaussian Splatting Diffusion
Accept (poster)
Summary: This paper presents a method to turn the Gaussian splatting point-cloud representation into a VAE representation. With that representation it is shown how to perform several tasks such as unconditional generation, conditional generation, and point-to-Gaussian generation, all of which are done using a diffusion model that operates on the latent. It is shown that it performed better than previous methods on these tasks. Strengths: The method presented in the paper is a novel way to incorporate the Gaussian splatting attributes into generation tasks. The method shows better results on some tasks than previous methods. Weaknesses: The main one is that while reading the paper I did not get an answer to the question of why forcing a VAE to predict Gaussian splatting should work better than a simpler method. It was shown that this is the case by experiments, but what is the motivation, given that SOTA NeRF methods produce better reconstructions than vanilla GS? Since the paper presents a method to create new 3D objects from Gaussian splatting primitives, the relevant papers from the early (1990-2002) use of Gaussian splatting should be cited. Additionally, the number of quantitative results is limited (even in comparison to the papers of the referred baselines), which makes it hard to evaluate the method. Moreover, the second part of the first sentence in the abstract “... yet the generation of Gaussian Splatting remains a challenge due to its discreteness and unstructural nature.” is IMHO logically challenging; only after reading the introduction does it become clear that the authors are referring to generating novel 3D assets using Gaussian splatting. The main concepts and ideas in the paper are interesting, but there is still work required in both the writeup and the experimental parts. Technical Quality: 2 Clarity: 1 Questions for Authors: see weaknesses. Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: see weaknesses. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: It is with great appreciation that we acknowledge the insightful feedback from Reviewer TmLe. We have addressed the questions and comments with careful consideration, and we encourage continued conversation to ensure the robustness of our findings. **Q1: Motivation, Literature, and Experimental Evaluation** Variational Autoencoders (VAEs) are well-suited for capturing complex distributions in a lower-dimensional latent space, which encodes diverse data into a compact latent representation, allowing for more efficient and structured generation of 3D models. In comparison to Neural Radiance Fields (NeRF), 3D Gaussian Splatting presents multiple benefits, especially regarding rendering efficiency and superior visual fidelity. Addressing the challenge of 3DGS generation is both timely and essential for the progression of the 3D modeling domain. The revised abstract will explicitly state the novelty and motivation behind using Gaussian splatting for novel 3D asset generation, emphasizing its significance in overcoming the limitations of discreteness and structural complexity. We carried out further experiments to compare our approach with the state-of-the-art baseline methods, such as EG3D (CVPR 2022), SSDNeRF (ICCV 2023), Shap·E, SplatterImage (CVPR 2024), TriplaneGaussian (CVPR 2024), LGM (ECCV 2024), and DreamGaussian (ICLR 2024). As shown in Figure A, Figure B and Figure D of the rebuttal PDF, our method shows more refined geometry and detailed textures. Besides, we compare text consistencies with the other SOTA methods in Table A of the rebuttal PDF. Our method demonstrates a competitive alignment between the text descriptions and the resulting 3D models. Besides, we provide a comparison of model parameter counts and generation times in Table B of the rebuttal PDF. DiffGS has significantly fewer parameters than LGM, resulting in reduced memory usage and computational overhead. 
Moreover, our approach achieves faster generation times than other state-of-the-art 3D generative models, demonstrating its efficiency in creating high-quality 3D content. --- Rebuttal Comment 1.1: Title: Thanks for the answers. We'll keep the rating. Comment: More effort and motivation are required.
Summary: This paper proposes DiffGS, a novel diffusion-based generative model that can generate 3D Gaussian Splatting (3DGS) representing an object. As 3DGS is naturally discrete and unstructured, it is not practical to directly diffuse the 3DGS representation. Thus, the authors propose to reconstruct the 3DGS with structured triplanes that represent three functions: a probability function that models the geometry (the mean of Gaussians) of 3DGS, a color function that models the color, and a transform function that models the transformation parameters of the Gaussians. Additionally, the authors propose a VAE that encodes 3DGS into a latent vector. By representing 3DGS with such structured data, it is then possible to train a diffusion model on the latent vectors, which can be recovered to 3DGS for rendering. Experiments show that the proposed DiffGS achieves preferable quality results compared to baselines and by adding conditions to the diffusion model, it can achieve various applications. Strengths: 1. The paper is well-written and easy to follow. 2. The idea of representing 3DGS with disentangled functions is novel and practical. Indeed, it is difficult to diffuse unstructured data like 3DGS, and representing 3DGS using functions that can both encode and decode is a novel direction. Weaknesses: 1. It can be observed that there might be holes and incomplete shapes in the generated 3DGS (Fig.5(b)). I suspect the reason for this is that the proposed Gaussian extraction algorithm struggles to sample points in high-frequency parts like the legs of chairs. While this can be improved by increasing the number of sampled Gaussians or the depth of the octree, it might be quite computationally intensive. 2. The inference efficiency is not shown in the experiments. During each sample, DiffGS requires optimization over the sampling points, which may further slow down the inference speed of the model. 
The authors should report the influence of the optimization on the inference time. However, it is acceptable if the inference speed is slower than baselines, since diffusion models are naturally slow at inference. 3. An important baseline is missing. Splatter Image [1] is also a 3DGS-based object reconstruction method and shares a similar idea of representing unstructured 3DGS with structured data. Since the Splatter Image models trained on ShapeNet are also available, it is reasonable to compare DiffGS with Splatter Image under the single-view reconstruction task. 4. The authors do not provide the number of parameters of DiffGS and the comparisons with baselines. In the quality comparisons, the authors should keep DiffGS and the other baselines at around the same number of parameters. [1] Splatter Image: Ultra-Fast Single-View 3D Reconstruction, CVPR 2024 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the inference time of DiffGS, and how much does the optimization process during inference influence the inference speed? 2. How many parameters does the model use? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the limitations are discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our sincere thanks to Reviewer cAhv for their comprehensive review and valuable suggestions. In response to the concerns raised, we have offered detailed explanations in the subsequent section, and we are eager to engage in further dialogue. **Q1: Incomplete Shapes in 3DGS** It is indeed possible that some models exhibit holes and incomplete shapes, especially in regions with high-frequency details. This issue arises from challenges in capturing fine details during the Gaussian extraction process. One straightforward approach to improve this is to increase the number of sampled Gaussians. By doing so, we can achieve a higher resolution representation, capturing more details in the high-frequency regions. Similarly, increasing the depth of the octree can enhance the model's ability to resolve fine details. We aim to balance quality and computational efficiency. While increasing the number of Gaussians and octree depth improves detail capture, it also escalates computational costs. We focus on a balance between satisfactory results and inference efficiency, and find it acceptable to occasionally generate shapes with minor artifacts. **Q2: Inference Efficiency and Optimization Impact** During Gaussian extraction, DiffGS performs optimization over the Gaussian positions to enhance the quality and accuracy of the generated 3D Gaussian Splatting geometries. This optimization step, while essential for ensuring high-quality outputs, does introduce additional computational overhead. We conducted supplementary experiments to analyze the impact of the number of Gaussian points on optimization time. The results of this ablation study are presented in Table C of the rebuttal PDF. In our experiments, we selected a configuration of 350K Gaussian points. This choice balances quality and computational efficiency, providing a robust representation of the model without excessively increasing processing time. 
The optimization time for this configuration is approximately 2.5 seconds. We also compared our model's generation time with other state-of-the-art 3D generative methods. The results are shown in Table B of the rebuttal PDF, where we highlight that our method demonstrates significant efficiency compared to the SOTA baselines DiffTF, Shap·E, SSDNeRF and DreamGaussian, despite the added optimization step. DiffGS also offers competitive generation time with GET3D and LGM. **Q3: Comparison with Splatter Image Method** For a comprehensive comparison with Splatter Image, we evaluated both methods under the single-view generation task. We conducted supplementary experiments to compare the visual quality of reconstructions generated by DiffGS and Splatter Image. The results are presented in Figure A of the rebuttal PDF. The visual comparison indicates that DiffGS produces higher fidelity reconstructions with better handling of fine details and fewer artifacts compared to Splatter Image. **Q4: Number of Parameters and Baseline Comparisons** We have included the number of parameters for DiffGS and other baseline methods in Table B of the rebuttal PDF. As shown in the table, DiffGS has a significantly smaller number of parameters compared to most baseline methods. The smaller parameter count of DiffGS translates into reduced memory usage and potentially faster inference times, which can be advantageous in environments where computational resources are limited. Despite having fewer parameters, DiffGS demonstrates superior performance, showing that its architecture efficiently captures essential features for high-quality 3D reconstruction.
Summary: The paper is about generating 3D objects by using Gaussian splatting. The method converts the points (with Gaussian properties) to continuous fields. With this idea, the irregularly structured data can be easily processed by neural networks. Strengths: The idea is interesting and novel. Usually we need many points (10k-1M) to represent a single object. With such large point clouds, it is very challenging to process the data with neural networks. Thus the paper proposes to convert a point cloud to a field which can be easily processed by neural networks. The octree design also seems interesting. This is useful to get fine-grained structure. Weaknesses: The results shown in the supplementary video are not very satisfying. Even though the paper shows some metrics comparing to existing methods, the visual quality is much worse. Specifically, the metrics shown in Table 1 are better than some prior works. However, I believe some important works are missing in the table, e.g., EG3D, DiffRF, SSDNerf. The evaluation protocol seems to be different from these works. I am curious how the authors pick 50k images as the reference set. SSDNerf used the testing set as the reference set. I know there is always a debate between training and testing set in the evaluation of generative models. But this should be mentioned in the paper. For the visual quality, results of EG3D and DiffRF are much better than this paper's. It would be better if the authors could do a visual comparison with these works. Technical Quality: 2 Clarity: 3 Questions for Authors: How do the authors optimize the loss in Eq 6? I understand how the points are converted to fields, so we can query the probabilities continuously in space. But how about other properties? They are not converted to continuous fields. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the insightful review provided by Reviewer 6WmW. Your thorough analysis and constructive feedback have significantly enhanced the quality of our work. **Q1: Visual Quality and Evaluation Protocol** We maintain that DiffGS achieves SOTA performance in terms of both numerical and visual quality. In response to the reviewer's comments, we conducted additional experiments comparing our method to the SOTA baseline methods including EG3D (CVPR 2022), SSDNeRF (ICCV 2023), Shap·E, SplatterImage (CVPR 2024), TriplaneGaussian (CVPR 2024), LGM (ECCV 2024) and DreamGaussian (ICLR 2024). We are unable to compare our method with DiffRF since it is not open-sourced. However, we have compared our method against other stronger baselines such as DiffTF and SSDNeRF, which have been shown to outperform DiffRF in various benchmarks. The visual comparison results are presented in Figure A, Figure B and Figure D of the rebuttal PDF. The results show that our method outperforms the SOTA baselines in capturing complex details while preserving geometric consistency. Additionally, we evaluate text consistency against other state-of-the-art methods in Table A of the rebuttal PDF. Our approach shows strong alignment between textual descriptions and the generated 3D models. Furthermore, Table B highlights a comparison of model parameter counts and generation times. DiffGS employs far fewer parameters than LGM, leading to lower memory consumption and reduced computational demands. Additionally, our method achieves quicker generation times than other top 3D generative models, underscoring its efficiency in producing high-quality 3D content. For our evaluation, we selected a diverse set of 50K images as the reference set. This selection was designed to reflect a wide range of scenarios within the dataset. We used the testing set as the reference set for rendering. 
This approach is aligned with practices like those used in SSDNeRF, ensuring that our evaluations are based on unseen data and provide a realistic assessment of model performance. By choosing the testing set as our reference, we prioritize demonstrating the model's ability to generalize and generate high-fidelity outputs in unseen scenarios. This decision aligns with the goals of ensuring robust evaluation and fair comparison across models. **Q2: Optimization of Loss in Equation 6** We compute the loss directly against real Gaussians, which serve as the ground truth for optimization. This approach ensures that the model aligns with actual data distributions effectively. Each property of the Gaussians, including position, color, and shape, is represented as a continuous field in the spatial domain. This allows for precise querying and supervision with the real Gaussians. --- Rebuttal Comment 1.1: Title: reply Comment: Overall, I am positive about the methodology. My concern is about the results. The additional results of EG3D shown in the rebuttal seem to be much worse than the original paper. It is still unclear how to build the continuous version of other Gaussian properties (colors, rotations, etc.) from the context. It looks like the only way is to use spatial interpolation with some kind of spatial kernels like RBFs. --- Reply to Comment 1.1.1: Title: Response to Reviewer 6WmW (1/2) Comment: Dear Reviewer 6WmW, Thanks for your response and the positive assessment of our methodology. We respond to each of your additional questions below. Please let us know if there is anything we can clarify further. **Discussion-Q1: The performance of EG3D** The visualization results of EG3D, as shown in Fig.D of the rebuttal PDF, were directly generated using the official code and pretrained models provided by its authors. For a fair comparison across all methods, we present the randomly selected samples of EG3D in Fig.D of the rebuttal PDF. 
We would like to clarify that the performance of EG3D is not consistently as good as depicted in their paper, where only a few visually appealing generated cars are showcased. We also refer the reviewer to the visualization results of EG3D in Fig.5 of the DiffTF paper, where EG3D similarly produces results that are much worse than those reported in the original EG3D paper. The third-party reproduction by the DiffTF authors aligns closely with our reproduction of EG3D. Furthermore, the quantitative comparisons in Table 2 of the DiffTF paper clearly demonstrate that GET3D and DiffTF outperform EG3D in terms of generation quality, while DiffGS significantly surpasses both GET3D and DiffTF, as demonstrated by all the comparisons in Sec.4 and Sec.E of the Appendix. To further highlight the superior performance of DiffGS, we conducted extensive experiments during the rebuttal period, comparing our method against several state-of-the-art baselines, including EG3D (CVPR 2022), SSDNeRF (ICCV 2023), Shap-E, SplatterImage (CVPR 2024), TriplaneGaussian (CVPR 2024), LGM (ECCV 2024), and DreamGaussian (ICLR 2024). The results demonstrate that DiffGS achieves visually appealing generation results and outperforms all the SOTA baselines. --- Reply to Comment 1.1.2: Title: Response to Reviewer 6WmW (2/2) Comment: **Discussion-Q2: The continuity of Gaussian properties (colors, rotations, etc.)** We appreciate the reviewer's insight on the continuity of Gaussian properties (e.g. colors, rotations). We will separately discuss all the Gaussian properties here. 1) **Probability.** As acknowledged by the reviewer, the Gaussian probabilities are indeed continuous in space. 2) **Color.** As detailed in Sec.3.1, the Gaussian colors are modeled using a series of spherical harmonics coefficients. These coefficients tend to be similar for nearby Gaussians in space, as they often share similar colors. 
For instance, the wings of an airplane typically have consistent coloring, resulting in only slight variations in the spherical harmonics coefficients for nearby Gaussians, which in turn creates a continuous color field. At the junction of two different colors (e.g., between the wing and fuselage of an airplane), the color attributes transition smoothly from one to the other by learning a continuous change in the spherical harmonics coefficients, enabled by the powerful variational auto-encoder network. 3) **Scale.** We acknowledge that scale is the most challenging property to learn in a continuous field. To address this issue, we implemented a regularization strategy, as detailed in Sec.B.1 (Gaussian Splatting Data Preparation) of the Appendix. Specifically, we observe that optimizing 3DGS freely may often lead to some extremely large Gaussians. This will lead to unstable training of the Gaussian VAE and latent diffusion models, further affecting the generative modeling results. Therefore, we clip the scales at a maximum size of 0.01 to avoid the abnormal Gaussians. Through this simple regularization on Gaussian scales, DiffGS is then capable of learning continuous scales. 4) **Rotation.** In practice, we found that the continuous learning of Gaussian rotation is strongly related to the learning of Gaussian scale. We observed that when applying the effective regularization strategy for Gaussian scales, Gaussian rotations can also be learned continuously during Gaussian optimization and VAE training. This observation is further supported by the success of TriplaneMeetGaussian in continuously modeling Gaussian rotations, where a similar regularization is applied. While this detail is not explicitly mentioned in the TriplaneMeetGaussian paper, we found evidence of this approach in their code. If the reviewer is interested, please refer to L169-172 of `TriplaneGaussian/tgs/models/renderer.py` in the code of TriplaneMeetGaussian. 
The key parameter `cfg.clip_scaling`, which controls the scale clipping threshold, can be found in L148 of `TriplaneGaussian/config.yaml` in the code. 5) **Opacity.** Nearby Gaussians contain similar opacities during Gaussian optimization. Therefore, the opacities of nearby Gaussians change continuously, similar to the colors. We are deeply grateful for your invaluable feedback and the time you dedicated to evaluating our work. Your comments and expertise are sincerely appreciated. Best regards, Authors
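The scale regularization described in the rebuttal above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: it assumes per-Gaussian scales are stored as an (N, 3) array, with the 0.01 ceiling taken from the rebuttal; the function and variable names are hypothetical.

```python
import numpy as np

def clip_gaussian_scales(scales, max_scale=0.01):
    """Clamp per-axis Gaussian scales to a ceiling so that a few
    abnormally large Gaussians cannot destabilize VAE training."""
    return np.minimum(scales, max_scale)

# two Gaussians, the first with an abnormally large y-axis scale
scales = np.array([[0.002, 0.5, 0.008],
                   [0.009, 0.003, 0.004]])
clipped = clip_gaussian_scales(scales)
print(clipped.max())  # no axis exceeds the 0.01 ceiling
```

The same element-wise clamp would apply regardless of how scales are parameterized, as long as it runs before the VAE and latent diffusion stages see the data.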
Summary: This paper proposes a new generative model for Gaussian primitive generation based on latent diffusion models. In detail, the method disentangles Gaussian Splatting generation into three functions, i.e., Gaussian probabilities, colors, and transforms. The method can achieve unconditional and conditional generation and extract Gaussians at arbitrary numbers via octree-guided sampling. Experiments on unconditional generation, conditional generation from text, image, and partial 3DGS, as well as Point-to-Gaussian generation, show the advances of the proposed method. Strengths: 1. DiffGS effectively deals with the discreteness of Gaussian Splats, by disentangling Gaussian Splatting into Gaussian probabilities, Gaussian colors, and Gaussian transforms. 2. DiffGS shows good performances on multiple tasks, including unconditional/conditional generation from text, image, and partial 3DGS, as well as Point-to-Gaussian generation. 3. DiffGS is able to generate high-quality Gaussian primitives at arbitrary numbers. 4. The paper is well-written and easy to follow. Weaknesses: 1. What is the benefit of predicting features in structural triplanes first, then extracting 3D Gaussian Splatting, compared to directly modeling the 3D using triplanes? The generation speed is not an advantage, and the performances are not compared/ablated in the experiments. 2. Are Gaussian probabilities work better than directly predicting point cloud explicitly? To show the advantage of the GS disentanglement proposed in this paper, it would be better to compare it with previous work, e.g., [1]. 3. Experiments can be strengthened by comparing with optimization-based methods for generating GS, e.g., [2], and GS prediction from multi-view Diffusion Models, e.g., [3]. 4. Mathematics. * $i$ and $j$. In lines 131-132, why "$j=1$" and "$j$-th" are used for $g_i$? Similar for the query points definition. * Eq. 3 typo. * Does bigger $\psi_{pf}$ mean larger probability? 
If so, do we want to maximize Eq. 9? Please double-check the math and add the necessary explanations. --- References: [1] Triplane meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers. CVPR 2024. [2] DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation. ICLR 2024. [3] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. ECCV 2024. Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to the weaknesses section for more details. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the comprehensive feedback and time that reviewer tnRu dedicated to evaluating our work. Below, we provide responses to each of your questions. We look forward to your further comments on our responses. **Q1: Benefits of Structural Triplanes in 3D Gaussian Splatting** The triplane structure utilizes 2D planes, which allows for the integration of standard 2D neural networks. This integration provides several benefits, including access to well-established 2D processing techniques and efficient computational frameworks. Triplanes serve as a bridge to utilize the generalization capabilities of diffusion models in 3D generation. By predicting features in triplanes first, we can more effectively capture and model Gaussian distributions. Directly modeling 3D with triplanes may not leverage the full potential of these distributions, especially when complex geometries are involved. Also, triplanes allow for better handling of spatial information, facilitating the reconstruction of detailed and accurate 3D representations. We conducted an ablation study where the triplane structure was replaced with a Multi-Layer Perceptron (MLP) to evaluate its effectiveness in modeling Gaussians. The results, visualized in Figure C of the rebuttal PDF, indicate that using an MLP fails to capture the intricacies of Gaussian modeling adequately. The triplane approach demonstrates superior performance in terms of detail accuracy and geometric fidelity. We further provide the efficiency comparison with other SOTA baselines in 3D generation, as shown in Table B of the rebuttal PDF. DiffGS significantly outperforms other baselines in terms of generation efficiency. Although generation speed is not our sole advantage, our method effectively balances speed and quality, underscoring the strengths of the triplane approach in delivering high-fidelity results efficiently. 
**Q2: Advantage of Gaussian Probabilities Over Direct Point Cloud Prediction** Predicting Gaussian probabilities allows us to extract Gaussians at arbitrary numbers. This flexibility is a significant advantage over directly predicting point clouds, which are typically fixed in number and less adaptable to varying levels of detail. Moreover, direct generation of unstructured Gaussians poses challenges due to their non-structured nature. By disentangling GauPF, GauCF, and GauTF, we create a framework that systematically handles Gaussian attributes, allowing for a structured representation that addresses these challenges. During the rebuttal, we directly compare our method with "Triplane Meets Gaussian Splatting" (CVPR 2024). The visual comparison results are presented in Figure A of the rebuttal PDF, illustrating the superior visual quality achieved by our approach. Our method consistently produced more accurate and detailed reconstructions compared to the baseline. By disentangling the Gaussian attributes, our approach allows for more efficient modeling of 3D structures, ensuring that both geometric and color details are preserved with high accuracy. **Q3: Comparison with Other Methods** We conducted comparative experiments focusing on two critical tasks: text-to-3D generation and single-view 3D generation. These tasks are central to demonstrating the flexibility and robustness of our approach in different application scenarios. The visual results of our comparisons are presented in Figure A and Figure B of the rebuttal PDF. These figures illustrate the superior visual quality and fidelity achieved by our method compared to the SOTA baseline methods including Shap·E, SplatterImage (CVPR 2024), TriplaneGaussian (CVPR 2024), LGM (ECCV 2024) and DreamGaussian (ICLR 2024). 
Our approach consistently produced more detailed and accurate generations, effectively capturing intricate textures and geometries that are often challenging for the compared optimization-based and multi-view based methods. We also evaluated the models using the CLIP score, a metric that measures the semantic alignment between the generated 3D models and the input conditions. The results are presented in Table A of the rebuttal PDF. According to Table A, our method achieved higher CLIP scores than both DreamGaussian and LGM. This indicates that our approach more faithfully adheres to the condition inputs. **Q4: Mathematical Notation and Clarifications** We appreciate the reviewer's careful examination of the mathematics. The use of $j$ was indeed incorrect and should be replaced with $i$ to maintain consistency in indexing. Each Gaussian $g_i$ should be consistently indexed with $i$, not $j$. We will correct any typographical errors in Equation (3), ensuring that it accurately conveys the intended mathematical relationship. $\psi_{pf}$: This function represents a probability-related component within our model, implying that larger values of $\psi_{pf}$ correspond to higher probabilities, and we will revise Equation (9) to correctly reflect the maximization intention. --- Rebuttal Comment 1.1: Comment: Thank you for your efforts to address the concerns. Here is one point that needs further clarification: as mentioned in Weakness 1, an ablation study comparing the proposed method and the one that only uses triplane to predict color and density would be beneficial to show the advantage of the proposed approach. --- Rebuttal 2: Comment: Dear reviewer tnRu, Thank you for your response and the helpful comments. Following your suggestions, we conduct an additional ablation study under the same experimental settings outlined in Figure C of the rebuttal PDF. Specifically, we directly model the Gaussian attributes (e.g. 
color, density) using triplane and evaluate its performance in terms of PSNR, SSIM, LPIPS metrics, and inference time. The results are presented in Table E below. As shown, while the approach that directly utilizes triplane for Gaussian modeling without feature prediction and decoding slightly improves the inference speed, it leads to significant degradation in the quality of 3D Gaussians. In contrast, our proposed method demonstrates superior performance in terms of PSNR, SSIM, and LPIPS, compared to this alternative design. The results demonstrate the effectiveness of our triplane framework designs and the key insight of disentangling Gaussian splatting through three novel functions. **_Table E: Ablation studies on the framework designs of the triplane._**

| | PSNR | SSIM | LPIPS | Inference Time (s) |
| :-----------: | :-------: | :--------: | :--------: | :------------: |
| Only Triplane | 23.55 | 0.9732 | 0.0340 | **9.1** |
| Ours | **34.01** | **0.9879** | **0.0149** | 9.5 |

Best regards, Authors --- Rebuttal Comment 2.1: Comment: Thank the authors for the detailed reply. For the "Only Triplane" ("directly model the Gaussian attributes (e.g. color, density) using triplane") experiment, is there a NeRF decoder used after the triplane, like in EG3D and SSDNeRF? If not, would it be possible to also compare with this setting? --- Rebuttal 3: Title: Response to Reviewer tnRu Comment: Dear Reviewer tnRu, Thanks for your response and the positive assessment of our rebuttal. We appreciate your suggestions on the potential comparison with the use of a NeRF decoder (e.g. EG3D, SSDNeRF). In response, we conduct additional experiments where we replace our Gaussian functions with the SSDNeRF decoder to predict attributes of colors and density, maintaining all other settings constant. The results of this experiment are presented in Table F below. As shown, the SSDNeRF decoder results in a noticeable decline in PSNR, SSIM, and LPIPS metrics. 
The results demonstrate that our proposed Gaussian functions contribute significantly to the 3D rendering quality, showcasing the effectiveness of our functional decomposition of the Gaussian representation and the design of the triplane framework. **_Table F: Ablation studies on the design of the decoder._** | | PSNR | SSIM | LPIPS | | :-------------: | :-------: | :--------: | :--------: | | SSDNeRF Decoder | 28.74 | 0.9838 | 0.0218 | | Ours | **34.01** | **0.9879** | **0.0149** | We further note that Gaussian splatting provides a more efficient solution for high-fidelity 3D rendering than volume-based methods like NeRF. By adopting a point-based representation for optimization and rendering, it reduces computational demands and accelerates the convergence of the training process. Most importantly, 3D Gaussian splatting significantly enhances rendering speed, making it particularly well suited for real-time applications such as virtual reality and interactive 3D simulations. In practice, the rendering time for each frame using 3D Gaussian splatting can be less than 0.01 seconds, compared to several seconds for NeRF. We are deeply grateful for your invaluable feedback and the time you dedicated to evaluating our work. Your comments and expertise are sincerely appreciated. Please let us know if there is anything we can clarify further. Best regards, Authors
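As context for the CLIP scores discussed in this thread: the metric reduces to a cosine similarity between CLIP image and text embeddings. Below is a minimal sketch with dummy vectors standing in for real CLIP features (a real evaluation would obtain `image_emb`/`text_emb` from a CLIP model; the vectors here are illustrative only):

```python
import numpy as np

def clip_score(image_emb, text_emb):
    """Cosine similarity between an image embedding and a text embedding."""
    a = image_emb / np.linalg.norm(image_emb)
    b = text_emb / np.linalg.norm(text_emb)
    return float(a @ b)

# Dummy vectors standing in for real CLIP features.
img = np.array([0.2, 0.7, 0.1])
txt = np.array([0.25, 0.65, 0.05])
score = clip_score(img, txt)
```

A higher score indicates closer semantic alignment between the generated 3D model's rendering and the conditioning text or image.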
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewers for their invaluable feedback and the time they spent evaluating our work. We are delighted that the reviewers recognized the novel representation and the importance of our paper. We respond to each reviewer individually, providing comprehensive analyses, visualizations, and ablation studies to address all the questions raised. We have uploaded a rebuttal PDF containing experimental results and visualizations; in the following responses we refer to this document as the "rebuttal PDF," as in "in Table A of the rebuttal PDF." Below, we address some of the common questions found in the reviews. **Global-Q1: Evaluation Against Other SOTA 3D Generative Models** We conduct additional experiments comparing DiffGS to other recent 3D generative models on text-to-3D and image-to-3D generation. Table A of the rebuttal PDF highlights the competitive performance of our model in text and 3D alignment tasks. As depicted in Figure A and Figure B of the rebuttal PDF, our model produces high-quality generations with more consistent geometry and more detailed textures. DreamGaussian tends to generate overly saturated images and is affected by the Janus problem. Shap-E faces challenges in producing semantically accurate and complex geometries. LGM generates multi-view images from text or a single viewpoint and reconstructs 3D Gaussians from them. However, inconsistencies in the generated multi-view images and the limited number of output Gaussians often result in inaccurate geometric reconstructions and lower-quality rendered results. In contrast, DiffGS introduces a novel disentangled representation of 3D Gaussian Splatting using three functions: the Gaussian Probability Function (GauPF), the Gaussian Color Function (GauCF), and the Gaussian Transform Function (GauTF). This representation allows us to generate high-quality Gaussians in arbitrary numbers, resulting in sharper and more detailed textures.
**Global-Q2: Addressing Post-Optimization Steps** The post-optimization process in our method is designed to refine and enhance the accuracy of the generated 3D Gaussian splatting by optimizing the proxy points toward the exact Gaussian centers with the largest probabilities. This step ensures the high fidelity and detail of the final generated Gaussians. We present the overall generation time in Table B of the rebuttal PDF and the optimization time for various Gaussian quantities in Table C of the rebuttal PDF. The results show that for generations with a smaller number of Gaussians, e.g., 50K, the optimization is extremely fast, taking only 0.64 seconds to converge. For high-quality Gaussian generations with 350K primitives, the optimization time increases to 2.5 seconds, which is still efficient. **Global-Q3: Balancing Computational Cost and Quality Enhancement** The increase in computational cost with additional Gaussians and octree depths is primarily due to the increased complexity of handling more data points and subdivisions. To address the trade-off, we conducted a series of experiments in Table C and Table D of the rebuttal PDF to identify the optimal balance between computational cost and reconstruction quality. The results indicate that a moderate increase in the number of Gaussians or the octree depth can significantly improve the Gaussian quality with minimal additional cost. For instance, lifting the number of Gaussians from 50K to 350K led to a 20% increase in PSNR while only increasing the computation time by about 1.8 seconds. --- Thank you again for your insightful feedback; we look forward to continuing the discussion. Pdf: /pdf/630d61dae52d9286eba1df021505585aad1f3aba.pdf
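For reference, the PSNR metric used in the trade-off discussion above is the standard peak signal-to-noise ratio in decibels; a minimal sketch, assuming images normalized to a peak value of 1.0:

```python
import numpy as np

def psnr(pred, target, peak=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, peak]."""
    mse = np.mean((np.asarray(pred) - np.asarray(target)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a uniform error of 0.1 on a [0, 1] image yields an MSE of 0.01 and therefore a PSNR of 20 dB.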
Dataset source: NeurIPS_2024_submissions_huggingface
Conference year: 2024
Summary: This article first transforms 3DGS into a regular representation, the triplane structure, to separately model the location probability, color, and other attributes of each Gaussian. This representation can then be used to train a generative model through a VAE and LDM. To sample Gaussian positions from the triplane structure, this work uses an octree to partition the space based on the magnitude of the location probability. It initializes the Gaussian point positions by sampling from the nodes with higher probability, and then further optimizes the final positions. Strengths: 1) This paper is well written and easy to understand. 2) The design supports versatile downstream tasks: text/image-conditional GS generation, GS completion, and point-to-GS generation. Weaknesses: 1) This work is only trained on a small dataset. Similar works such as "Triplane Meets Gaussian Splatting" are trained on the Objaverse dataset, which allows visualizing more diverse results. This might be due to the cumbersome data processing, and will limit scaling up in the future. 2) Equation (3) is a little strange. Why does the choice of $\tau$ matter, as shown in the ablation study? 3) The 3DGS representation is not properly evaluated. Given an existing 3DGS, is it possible to reconstruct it by first representing it as GauPF, GauCF, and GauTF and then recovering it via octree-guided geometry sampling? 4) Personally, I am not sure whether 3DGS is a suitable representation for 3D generation, since it is highly irregular and cannot be embedded into a generative model conveniently. Moreover, the data processing is really time-consuming, which will further limit large-scale training. 5) More comparisons with GS-based reconstruction models should be added, such as "LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation". Technical Quality: 3 Clarity: 2 Questions for Authors: 1) It seems the optimization target in equation (9) is wrong.
Should it be $-\log(\psi(\rho))$? Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have shown some failure cases and limitations of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful to reviewer 3Kid for the thoughtful feedback and time invested in evaluating our work. We address each question below. **Q1: Dataset Size and Scalability** We note that data fitting is a common step for most generative models. For example, DiffRF requires voxelized radiance field fitting during data processing, which may take hours for each shape. NDF [1] and DiffTF require SDF-based and NeRF-based triplane fitting for data preparation. In comparison, our method takes only about 3 minutes to fit the Gaussian representation of an object using a single NVIDIA RTX 3090 GPU. This efficiency demonstrates the capability of our approach to handle 3D data with minimal computational overhead. During the rebuttal period, we faced time constraints that limited our ability to process larger datasets such as Objaverse. Our current focus was on demonstrating the model's capabilities within the constraints of available resources and time. The inherent efficiency of our method makes it well suited for scaling up to larger datasets like Objaverse in the future. **Q2: Equation (3) and the Choice of $\tau$** In the paper, we provide a comprehensive explanation of the role of $\tau$ and why its selection is critical, as demonstrated in our ablation study (Table B). By controlling the truncation of the GauPF, $\tau$ helps concentrate the learning process near the surface of objects. The truncation strategy prevents the learning from being influenced by distant regions in space, which are less relevant for accurate surface representation. In addition, in our ablation study we explored both exponential and linear mappings for distance-to-probability transformations. While both methods are commonly used, our results showed that linear mapping performs better than exponential mapping in our framework.
The linear mapping approach with a truncation strategy resulted in more stable and accurate training outcomes, highlighting the significance of selecting an appropriate distance-to-probability mapping for the task. The ablation study identifies a suitable $\tau$ that balances sensitivity and specificity, ensuring efficient learning of the object's surface while maintaining overall model stability. **Q3: Evaluation of the 3DGS Representation** During the rebuttal, we further evaluated our 3DGS representation by randomly selecting two airplanes from the dataset and modeling each using GauPF, GauCF, and GauTF. The results are shown in Figure C of the rebuttal PDF, which provides visual comparisons of the original and reconstructed objects, showcasing the accuracy and effectiveness of our approach in capturing intricate details. In Figure C, we also report PSNR scores, indicating a strong ability to reconstruct the original 3DGS with high fidelity. The integration of GauPF, GauCF, and GauTF ensures that both geometric and visual details are preserved during reconstruction. Octree-guided sampling allows for efficient handling of complex geometries, reducing computational overhead by focusing resources on critical areas. This makes our method scalable and practical for complex objects. **Q4: Suitability of 3DGS for 3D Generation** Compared to NeRF, 3DGS offers several advantages, particularly in terms of rendering efficiency and high visual fidelity. The task of 3DGS generation is not only timely but crucial for advancing the field of 3D modeling. As the demand for 3D content creation continues to grow, efficient and robust methods like 3DGS become increasingly important for various applications, including gaming, virtual reality, and digital content creation. One of the primary contributions of our work is addressing the challenges posed by the discrete and unstructured nature of 3D Gaussian Splatting.
Our approach provides three continuous Gaussian Splatting functions to effectively embed 3DGS into a generative model, allowing for efficient and accurate 3D generation. As mentioned in our previous response, fitting the Gaussian representation for each shape is quite efficient, taking only 3 minutes on a single NVIDIA RTX 3090 GPU. **Q5: Comparison with GS-Based Reconstruction Models** We conducted supplementary experiments to compare the visual quality of our method with several SOTA 3D generative models, including LGM and other GS-based approaches. The results of these visual comparisons are showcased in Figure A and Figure B of the rebuttal PDF. The visual results clearly demonstrate the strengths of our method in capturing intricate details and consistency with the conditions. Our approach benefits from an efficient representation of Gaussians that enhances both processing speed and visual output quality, making it well suited for high-fidelity content creation. In Table A of the rebuttal PDF, we provide a comparison of the CLIP score between our method and other SOTA 3D generative methods, including Shap-E, LGM (ECCV 2024), and DreamGaussian (ICLR 2024). Our method demonstrates superior CLIP scores, indicating that DiffGS generates 3D content that is more faithful to the input conditions. We have also included a comparison of model parameter counts and generation times in Table B. DiffGS has a considerably smaller number of parameters than LGM, leading to decreased memory consumption and computational overhead. Additionally, our approach enables quicker generation than other SOTA 3D generative models, showcasing its ability to efficiently create high-quality 3D content. **Q6: Correction of Optimization Target in Equation (9)** We appreciate the reviewer's careful examination of Equation (9) and for pointing out the error in the optimization target. The revised manuscript will reflect this correction.
[1] Shue, J.R., Chan, E.R., Po, R., Ankner, Z., Wu, J., Wetzstein, G.: 3d neural field generation using triplane diffusion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 20875–20886 (2023)
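The linear versus exponential distance-to-probability mappings with truncation discussed in Q2 above could be sketched as follows. This is our illustrative reading: the function names and the exact form of the truncation are assumptions, not the paper's definitions.

```python
import numpy as np

def linear_truncated_prob(dist, tau):
    """Linearly map unsigned distance to a probability, truncated to 0 beyond tau.
    Probability is 1 on the surface (dist = 0) and decays to 0 at dist = tau,
    so regions farther than tau from the surface cannot influence learning."""
    return np.clip(1.0 - np.asarray(dist) / tau, 0.0, 1.0)

def exponential_prob(dist, scale):
    """Exponentially decaying alternative explored in the ablation; it never
    reaches exactly zero, so distant regions retain a small influence."""
    return np.exp(-np.asarray(dist) / scale)
```

Under this reading, the hard cutoff at `tau` is what concentrates the learning signal near the object surface, which matches the rebuttal's explanation of why the choice of $\tau$ matters.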
Summary: The paper introduces DiffGS, a text/image-to-3D generative model with 3D Gaussian splatting as its output representation. The model generates Gaussians from a text/image condition via a CLIP-augmented latent diffusion model (LDM), whose output is a latent vector that can be decoded into a triplane representation. DiffGS then translates this triplane representation by first sampling a plausible set of Gaussian locations from the learned implicit function (GauPF), and then reconstructs the colors and shapes of these Gaussians with two more learned implicit functions (GauCF, GauTF), which are used as components of the objective function in the optimization-based generation. Therefore, the model consists of (1) a Gaussian LDM, (2) a latent-to-triplane decoder, and (3) three implicit functions (GauPF, GauCF, GauTF). The (2, 3) decoder models (triplane decoder and three implicit functions) are trained in a VAE fashion, and the (1) Gaussian LDM is trained on top of the VAE's learned latent space. The method is tested on the ShapeNet and DeepFashion3D datasets with FID/KID metrics. Strengths: 1. The authors have attached supplementary materials that effectively show their code and the outputs of their method. 2. The paper contains an ablation study (Table 2) that justifies each component of the framework. 3. Visual results show that the proposed method effectively generates complicated geometries with the Gaussian splatting representation. Weaknesses: 1. Since the paper targets text/image-to-3D generation with Gaussian splatting based on a domain-specific (e.g., ShapeNet/DeepFashion3D) latent diffusion model, I believe that the results should be compared with other similar 3D generative models, at least those mentioned in Section 2.2 by the authors. **Particularly, the work seems to solve a similar problem to LGM (ECCV 2024, code released), and therefore should be thoroughly compared with it.** 2.
**The detailed structure of the proposed models is not presented in the paper.** For example, what is the dimension of the latent z in Figure 2, and why did the authors choose this structure? How do the authors model the Gaussian splatting encoder that encodes the (fixed or variable number of) Gaussians into this latent z? Is the number of Gaussians fixed in this model? How are the architectures of GauPF/GauCF/GauTF designed? How can DALLE-2's LDM, which was originally proposed to model the latents of a 2D image, be used to model the latent triplane representation? How is the triplane decoder shaped? How many parameters are used in the models? Such information should be included in the main manuscript (or at least in the appendix) for a proper presentation of the idea. 3. The proposed method uses post-optimization steps to infer the generated 3D Gaussian splatting. This seems to require **additional generation time in the inference steps.** This can be considered a weakness since there are already many large generative models of Gaussian splatting that generate samples in tens of seconds, such as LGM. Even if the proposed model requires a long delay, such delay can still be justified if the results are sufficiently detailed and the sampling time is sufficiently short (as how we acknowledge the contribution of 2D LDM models). However, there is no information on the exact generation time specified in the paper. 4. The positive correlation between the number of Gaussians / octree depths and the reconstruction quality enhancement in Tables 3 and 4 seems trivial. I believe the more interesting question is **how much the increase in computational cost trades off against the quality enhancement**, etc. 5. Since the model uses CLIP to perform text/image-to-Gaussian generation, the paper should report **numbers that show the text fidelity, e.g., CLIP scores**.
Comparing only FID scores (Table 1) does not seem to effectively demonstrate the superiority of the proposed method. In summary, my key concerns are the presentation of the materials, the lack of comparison with more recent papers (e.g., LGM) that appear in Section 2.2 of the main manuscript, justification of the sampling-time delays that seem to be present, and the lack of quantitative comparison of text fidelity. Unless these concerns are resolved, I believe that the paper is not ready to be published in the venue. Technical Quality: 2 Clarity: 1 Questions for Authors: 1. Although the authors claim the generation of 3D Gaussian splatting, the actual generated sample is the triplane representation, which is realized as a Gaussian splatting with an implicit neural representation. Since there is already well-established triplane-to-NeRF decoding (e.g., TriNeRFLet: A Wavelet Based Triplane NeRF Representation, ECCV 2024, code released), are there **specific technical benefits of decoding this information into Gaussian splatting in this case?** 2. The proposed work involves a dedicated generative model specific to certain domains (e.g., ShapeNet). However, there are other branches of work that approach text/image-to-3D generation by leveraging the power of 2D diffusion models, e.g., DreamFusion and its offspring that use SDS losses, which seem to be mature at this moment. Concerning the recent achievements of the latter branch of work, I am not fully convinced that a domain-dedicated 3D generator should be trained. May I ask **why the authors think training a dedicated 3D generator is a good way of solving 3D generation tasks?** 3. In line 109: lake → lack. Please note that these questions are not counted in my overall scoring. Confidence: 5 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: Yes, the limitations are addressed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer p5pi for the thorough review and valuable feedback. We address each point below. **Q1: Comparison with other recent 3D generative models** We refer the reviewer to the "Global-Q1: Evaluation Against Other SOTA 3D Generative Models" section of the global response for a comparison with other methods. **Q2: Detailed Structure of Proposed Models** Below, we address each point raised by the reviewer. 1. Latent dimension z: The latent vector z in Figure 2 of the paper is designed to capture the essential features of the Gaussian splatting representation. We set the latent dimension to 256 to balance expressiveness and computational efficiency; this choice is empirically determined to balance the encoding of diverse 3D shapes with a manageable model size. 2. Gaussian Splatting Encoder: The Gaussian splatting encoder transforms the input Gaussians into the latent space. We utilize a PointNet[1]-based architecture as the implementation of the encoder, which efficiently processes Gaussian data. In detail, we adjust the input dimensions of the PointNet within the SDF-VAE encoder to accept a K-dimensional 3DGS as input. 3. Design of GauPF/GauCF/GauTF: GauPF, GauCF, and GauTF share a similar architecture consisting of three MLP-based blocks. The input latent vector has a dimension of 131, comprising the latent features sampled from the triplane and the XYZ coordinates. The feature channels for the first two blocks are [131, 512, 512, 512, 512] and [643, 512, 512, 512, 512], with a skip connection preceding the second block. To leverage the properties of Gaussians, the last block of GauTF is designed uniquely. To ensure the generated scale remains within a certain threshold, a truncation operation is applied to the scale attribute. In addition, to achieve uniformity in the form of the rotation quaternion, we normalize it to unit length along its last dimension.
Finally, we use the sigmoid activation function to restrict opacity values to the range of 0 to 1. 4. Use of DALLE-2's Latent Diffusion Model (LDM): The core operation of the LDM is performing diffusion in a one-dimensional latent space. This diffusion process is general and applicable to various data types, including 3D representations like triplanes. Although LDMs are frequently used for 2D image processing, their ability to handle latent spaces makes them suitable for broader applications. We begin by encoding the triplane data into a latent space suitable for latent diffusion. This encoding captures the essential geometric and visual information needed for accurate 3D reconstruction, and the LDM applies its diffusion process to this encoded latent space. 5. Triplane Decoder Architecture: The triplane decoder reconstructs the 3D shape from the latent space using transposed convolution layers interleaved with batch normalization and activation functions. This structure is designed to progressively upscale the 2D feature maps while maintaining high fidelity and detail. 6. Parameter Counts: The complete DiffGS model consists of approximately 127.4 million parameters. In Table B, we compare the parameter count of our method with other SOTA 3D generation methods. The results demonstrate the efficiency of DiffGS, which uses significantly fewer parameters than DiffTF (929.9M), Shap-E (759.5M), and LGM (429.8M). **Q3: Addressing Post-Optimization Steps** We refer the reviewer to "Global-Q2: Addressing Post-Optimization Steps" for results and analyses. **Q4: Trade-Offs Between Computational Cost and Quality Enhancement** We refer the reviewer to "Global-Q3: Balancing Computational Cost and Quality Enhancement" for a detailed discussion on balancing computational costs with quality enhancement.
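The output constraints described for the last GauTF block in Q2 (scale truncation, quaternion normalization, sigmoid-bounded opacity) could be sketched roughly as follows. The 8-channel output layout and the `scale_max` threshold are our own illustrative assumptions, not values from the paper:

```python
import numpy as np

def gaussian_head(raw, scale_max=0.05):
    """Map raw network outputs to valid Gaussian attributes.

    raw: array of shape (..., 8) = 3 scale + 4 rotation quaternion + 1 opacity.
    scale_max is an assumed truncation threshold."""
    scale = np.clip(raw[..., :3], -scale_max, scale_max)         # truncate scale
    quat = raw[..., 3:7]
    quat = quat / np.linalg.norm(quat, axis=-1, keepdims=True)   # unit quaternion
    opacity = 1.0 / (1.0 + np.exp(-raw[..., 7:8]))               # sigmoid -> (0, 1)
    return scale, quat, opacity
```

Each constraint keeps the predicted attributes in the range that a 3DGS rasterizer expects: bounded anisotropic scales, valid rotations, and opacities in (0, 1).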
**Q5: Inclusion of Text-Fidelity Metrics** We conducted experiments to evaluate the text fidelity of our text-to-Gaussian generation using CLIP scores in Table A of the rebuttal PDF. Compared to LGM and the other baseline models, our method shows a consistent improvement in CLIP scores, confirming the superior text fidelity of our approach. **Q6: Decoding the Triplane Representation into Gaussian Splatting** The triplane representation serves as an effective intermediate step that simplifies the complex task of directly regressing explicit 3D Gaussian attributes. Triplanes offer a structured way to capture spatial information without being hindered by the non-structural nature of Gaussian splatting. Gaussian splatting is inherently more efficient for training and rendering than volumetric approaches like NeRF: it utilizes point-based representations, which require less computational overhead and facilitate quicker convergence during training. The use of Gaussian splatting also enables faster rendering speeds, making our approach suitable for real-time applications such as virtual reality and interactive 3D simulations. **Q7: Justification for Training a Domain-Dedicated 3D Generator** SDS-based methods require extensive optimization and are computationally intensive, particularly when distilling scores from 2D to 3D. They are also highly sensitive to hyperparameter settings such as learning rates and noise schedules. Additionally, these models often exhibit the Janus problem in 3D shapes. We compare with the SDS-based Gaussian generative model DreamGaussian in Figures A and B, as well as Table B. The results demonstrate that our dedicated 3D generator DiffGS is faster and produces more accurate, high-fidelity geometric structures than SDS-based methods. **Q8: Typographical Error** We thank the reviewer for identifying the typographical error in line 109, which will be corrected in the revised manuscript.
[1] Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652–660, 2017. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their careful and detailed rebuttal with additional experiments, which have resolved many of my initial concerns. I have a few remaining questions regarding the rebuttal. --- **Global-Q1: Evaluation Against Other SOTA 3D Generative Models** Given that the domain-specific SOTA methods compared in the additional experiments, such as LGM, are trained on the Objaverse dataset, it might be unfair to directly compare them with ShapeNet-fitted DiffGS for the generation of ShapeNet data. Would you share your thoughts on how DiffGS could be better (in terms of diversity and quality) if it were scaled to the Objaverse dataset, based on the shown experiments? --- **Q7: Justification for Training a Domain-Dedicated 3D Generator** According to Appendix C of the manuscript, training for ShapeNet already takes a week on an 8-GPU server. I think that, in order to make the contribution of this work practically useful, the method should be scaled up to at least Objaverse-scale datasets (perhaps in the future; I do not believe it is adequate to mandate conference papers to train on huge datasets, so I am not requiring the authors to do these experiments). I am a little skeptical about the computational feasibility of this scaling-up process. For example, training on a 100-GPU cluster for six months to get a domain-specific Gaussian splatting generator would be a waste of time compared to having a 10-minute image-to-3DGS generator such as LucidDreamer [1]. Would you persuade me that the proposed approach is computationally feasible and practically meaningful? --- These are the last of my concerns.
[1] Yixun Liang, et al., LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching, CVPR 2024. --- Reply to Comment 1.1.1: Title: Response to Reviewer p5pi (1/2) Comment: Dear Reviewer p5pi, Thanks for your response and the positive assessment of our rebuttal. We respond to each of your additional questions below. Please do not hesitate to let us know if you have any additional questions. **Discussion-Q1: Thoughts on the superiority of DiffGS in terms of diversity and quality when scaling up** We believe that DiffGS offers a more flexible and effective approach to 3D Gaussian representation and generation. By disentangling 3D Gaussian Splatting (3DGS) into three novel functions that model Gaussian probabilities, colors, and transformations, DiffGS delivers significant advantages in Gaussian quality, generation efficiency, and generation diversity. 1) DiffGS can scalably generate Gaussian primitives in arbitrary numbers. All previous works that explicitly reconstruct Gaussians can only generate limited numbers of Gaussians. For example, TriplaneMeetsGaussian generates up to 16,384 Gaussians, and LGM generates up to 65,536 Gaussians. In contrast, we introduce the GauPF, which models Gaussian probabilities using neural functions. A specially designed discretization algorithm is used to extract Gaussian geometries from the GauPF by octree-guided sampling, enabling the generation of Gaussians in arbitrary numbers. The number of Gaussians is a critical factor in the rendering quality of 3D Gaussian Splatting. As shown in the ablation studies in Table 3, the number of Gaussians significantly affects the quality of Gaussian reconstructions. With the ability to generate arbitrary numbers of Gaussians, DiffGS offers a naturally superior solution for 3D Gaussian generation. 2) We designed DiffGS using latent diffusion models (LDM), which significantly enhances both the efficiency and the generation diversity of DiffGS.
While most popular approaches for creating 3D Gaussian Splatting (3DGS), such as LGM and TriplaneMeetsGaussian, primarily focus on "reconstructing" 3DGS, following the recent trend of large reconstruction models (LRM), they fall short in "generating" 3DGS. The reconstruction-based methods do achieve convincing results on the learned data domain, but their capability of creating diverse 3D shapes is limited due to the "reconstruction" target. In contrast, diffusion models have demonstrated a strong capability of generating diverse samples, both with and without conditions. Leveraging diffusion models, DiffGS shows great capability in generating diverse and high-quality 3D Gaussians compared to the "reconstruction"-based method LGM. The comparisons are shown in Figure A and Figure B of the rebuttal PDF. Moreover, by training DiffGS with a diffusion model in latent space (i.e., an LDM), it achieves remarkable efficiency in generating 3D Gaussians. As shown in Table B of the rebuttal PDF, DiffGS outperforms LGM, SSDNeRF, DiffTF, and Shap-E in terms of speed, even when generating 350K Gaussians, which is significantly more than the number of Gaussians that previous works (e.g. LGM) can produce. 3) DiffGS is capable of generating 3D Gaussians unconditionally or conditionally from text and images. Moreover, DiffGS is the first model capable of solving the tasks of Gaussian completion and point-to-Gaussian generation. In contrast, previous models like TriplaneMeetsGaussian, DreamGaussian, and LGM can only generate Gaussians from images or text. Through the extensive experiments and applications, we demonstrate that DiffGS is a general and flexible framework suitable for most tasks that take Gaussians as the generation target.
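The octree-guided sampling described in point 1) can be illustrated with a toy sketch: recursively subdivide the domain, prune cells whose probability provably stays below a threshold, and sample candidate Gaussian centers inside the surviving leaves. The probability field (a shell around the unit sphere), the conservative cell bound, and all parameter values below are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

# Toy probability field peaking on the unit-sphere surface (stands in for a GauPF).
def prob(p):
    return np.exp(-4.0 * abs(np.linalg.norm(p) - 1.0))

def subdivide(center, half, max_depth, tau=0.5):
    """Octree split of a cube; keep finest cells whose probability may exceed tau."""
    slack = half * np.sqrt(3.0)  # half-diagonal: max offset of any point in the cell
    # Conservative upper bound on prob over the cell (specific to this toy field).
    bound = np.exp(-4.0 * max(0.0, abs(np.linalg.norm(center) - 1.0) - slack))
    if bound < tau:
        return []                # the whole cell is provably below the threshold
    if max_depth == 0:
        return [(center, half)]  # surviving leaf
    offsets = [np.array([dx, dy, dz]) for dx in (-0.5, 0.5)
               for dy in (-0.5, 0.5) for dz in (-0.5, 0.5)]
    return [leaf for o in offsets
            for leaf in subdivide(center + half * o, half / 2.0, max_depth - 1, tau)]

leaves = subdivide(np.zeros(3), 1.0, max_depth=4)
rng = np.random.default_rng(0)
# One candidate Gaussian center sampled uniformly inside each surviving leaf.
centers = np.stack([c + rng.uniform(-h, h, size=3) for c, h in leaves])
weights = np.array([prob(c) for c, _ in leaves])  # leaf probabilities for weighting
```

Because the number of surviving leaves grows with `max_depth`, this kind of scheme can emit an arbitrary number of candidate centers, which is the property point 1) emphasizes; the real method refines the sampled positions further via optimization.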
--- Reply to Comment 1.1.2: Title: Response to Reviewer p5pi (2/2) Comment: **Discussion-Q2: Feasibility and practical meaning of DiffGS compared to SDS-based approaches** We believe that training a native 3D generator like DiffGS offers a more promising future for 3D content creation than SDS-based methods. **(1)** SDS-based approaches require time-consuming optimization to distill 3D geometry and appearance from 2D diffusion models, often taking hours to converge. State-of-the-art methods like DreamFusion, Magic3D, MVDream, and RichDreamer require 2-5 hours on one A100 GPU for optimization, as reported in their papers. The recent work LucidDreamer converges faster but still requires 36 minutes on one A100 GPU, as reported in its paper. In contrast, DiffGS takes less than 10 seconds for inference on a single 3090 GPU, which is several orders of magnitude faster than SDS-based methods. We further note that training DiffGS for one day with eight 3090 GPUs is sufficient to achieve convergence for each class in ShapeNet. The time (5 days) listed in Appendix C is for full convergence, which is not necessary. We report the generation performance when training DiffGS for 12 hours, 24 hours, and 5 days in Table G below. As shown, with the utilization of the LDM, training DiffGS for one day is enough for convergence. It is important to note that DiffGS was trained on 3090 GPUs, which are significantly slower and have much less memory than the commonly used A100 GPUs (24GB vs. 80GB). The results highlight DiffGS's ability to scale to large datasets. We estimate that training DiffGS on the large 3D dataset Objaverse, which contains 1M shapes, would take approximately 5-7 days to converge on 64 A100-80G GPUs. We believe this time is acceptable since we only need to train DiffGS once.
**_Table G: Ablation studies on the training time._**

| | FID-50K Chair | KID-50K (%) Chair | FID-50K Airplane | KID-50K (%) Airplane |
| :------: | :-----------: | :--------------: | :--------------: | :-----------------: |
| 12 hours | 42.81 | 3.773 | 57.16 | 5.793 |
| 24 hours | 36.69 | 2.244 | 50.04 | 3.864 |
| 5 days | **35.28** | **2.148** | **47.03** | **3.436** |

Due to the limited time and computing resources during the discussion period, we are not able to train DiffGS on the large-scale Objaverse dataset. We will conduct experiments that scale up data sizes in the revised version. **(2)** SDS-based methods often lead to inconsistent generations, particularly the multi-head Janus problem. The 2D image diffusion models used in SDS lack an explicit understanding of both geometry and viewpoint. This absence of perspective information and explicit 3D supervision can result in the multi-head Janus problem, where realistic 3D renderings fail to achieve view consistency, causing every rendered view to be perceived as the front view. Although some works attempt to address this issue by incorporating view information into 2D image diffusion models (e.g., Zero-1-to-3, MVDream), they require both time-consuming 2D diffusion training (several weeks) and per-shape optimization. In contrast, native 3D generators like DiffGS are directly trained on 3D shapes, inherently ensuring 3D consistency and naturally avoiding the Janus problem. **(3)** A geometry initialization from native 3D generators significantly benefits SDS-based methods. The experimental results of recent SOTA SDS-based methods also demonstrate the necessity of introducing native 3D generation priors as the geometry initialization. We take LucidDreamer, which is mentioned by the reviewer, as an example. It leverages the generation results of Point-E, which is a native 3D generator taking colored points as the generation target, as the geometry initialization for SDS-based optimization.
The ablation study shown in Figure 7 of the LucidDreamer paper demonstrates that the generation performance degrades without using the Point-E generations as the initialization for introducing consistency-aware 3D information. We believe that advancements in native 3D generators (e.g., DiffGS) will further contribute to improvements in SDS-based methods (e.g., LucidDreamer) by providing superior geometry initialization. We are deeply grateful for your invaluable feedback and the time you dedicated to evaluating our work. Your comments and expertise are sincerely appreciated. Please let us know if there is anything we can clarify further. Best regards, Authors
null
null
null
null
Fast Rates for Bandit PAC Multiclass Classification
Accept (poster)
Summary: This manuscript deals with multiclass ($K$ labels) PAC classification under a partial monitoring scheme, as introduced by Daniely et al. ('11). The sample complexity for $(\epsilon,\delta)$-PAC of a naive uniform sampling algorithm would be $(K/\epsilon^2) \log(|\mathcal{H}| / \delta)$ where $\mathcal{H}$ is a finite family of classifiers. This manuscript introduces a two-step procedure that achieves the nearly optimal bound $(\mathrm{poly}(K) + 1/\epsilon^2)\log(1/\delta)$. This bound is optimal even in the simpler setting with complete feedback. The authors then extend the procedure to infinite classes with finite Natarajan dimension. Strengths: 1) The manuscript solves an open question in the multiclass learning literature under partial monitoring. Although I am not an expert in the field, I find the idea of a weighted ERM in the second step of the algorithm, and its connection to the guarantees of the SPIDER gradient estimates for the Frank-Wolfe algorithm, interesting. 2) The manuscript is very well written and the authors manage to convey the main ideas behind their procedures well. The proofs are clear. Weaknesses: None. Technical Quality: 4 Clarity: 4 Questions for Authors: Line 177: What is J_{x,y}? Could the procedure allow for a better dependency with respect to $K$? Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort. Please see our comments below on the specified questions. * “Line 177: What is J_{x,y}?” This is a typo and should be replaced by $r_{x,y}$; this will be fixed in the final version. Thanks for catching it! * “Could the procedure allow for a better dependency with respect to $K$?” We believe that the polynomial dependence on $K$ in the sample complexity bound could be reduced, but we are currently unsure how exactly this can be done and what the correct optimal dependence is. --- Rebuttal Comment 1.1: Comment: Thank you for your response.
Summary: The authors present a novel algorithm to solve the $(\epsilon,\delta)$-PAC bandit multiclass classification problem. For a finite hypothesis class, they give an algorithm with sample complexity $O(\mathrm{poly}(K) + 1/\epsilon^2)$ that improves on the previous $O(K/\epsilon^2)$ bound. They also similarly show that for a possibly infinite class they can improve on the state-of-the-art. Strengths: The paper solves an existing known issue without introducing too many new assumptions. They show the bounds match the optimal rate in the full-information regime. Their use of the Frank-Wolfe algorithm is also interesting. Weaknesses: The improvement is shown by moving from $O(K/\epsilon^2)$ to $O(\mathrm{poly}(K) + 1/\epsilon^2)$. The constants here are on the order of $K^8$ and the time horizon is on the order of $K^4$. This means that if $K$ is relatively large, you might end up doing worse than the previous bounds, which limits the scope of impact of this work. Technical Quality: 4 Clarity: 3 Questions for Authors: What would be a scenario where this bound performs better than the previous work, although theoretically this is better? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort. Please see our comments below on the specified weaknesses and questions. * “...This means that if $K$ is relatively large, you might end up doing worse than the previous bounds, which limits the scope of impact of this work.” You are correct in that our current bounds improve upon previous results in regimes where $K$ is not too large relative to $1 / \epsilon$. However, we believe that removing polynomial dependencies on $K$ from the main $1/\epsilon^2$ term is a crucial and significant step towards a comprehensive understanding of the bandit multiclass problem. * “What would be a scenario where this bound performs better than the previous work, although theoretically this is better?” Since this is a purely theoretical study, we do not focus on empirical scenarios in which our algorithm may outperform existing ones. Our results formally show that if we aim to find an $\epsilon$-optimal policy where $\epsilon \ll 1 / K^4$, then our suggested algorithm uses considerably fewer samples than known algorithms, which require $K / \epsilon^2$ samples. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for your response. I maintain my rating of accept
Summary: This paper studies bandit multiclass classification in the agnostic PAC setting. For both finite and infinite hypothesis classes, they provide efficient learning algorithms (assuming access to a weighted ERM oracle) whose sample complexity significantly improves upon the previous best known rate of $\frac{K}{\epsilon^2}$. As a corollary, they show that unlike the realizable PAC setting, there is no price of bandit feedback in the agnostic PAC setting. The main algorithmic ideas involve a stochastic optimization technique for efficiently recovering a low-variance exploration distribution over the hypothesis class. Strengths: - This paper is very well written and easy to follow - The technical contributions are solid, results are interesting and surprising, and proof techniques are non-trivial. In particular, the fact that there is no price of bandit feedback in the agnostic PAC setting is interesting and not obvious a priori - This paper resolves an open question posed by Daniely, Sabato, Ben-David, and Shalev-Shwartz (2011) - Overall, I feel that this paper is a nice contribution to learning theory that furthers our understanding of bandit multiclass learnability Weaknesses: There are no major weaknesses. That said, I think it would be nice to have: - a more detailed discussion of the realizable bandit PAC setting in the main text summarizing the known rates and algorithmic techniques - some concrete practical scenarios where the bandit agnostic PAC setting is realistic - discussion of the assumption of finite label spaces and some thoughts on the right characterization of bandit PAC learnability and sample complexity when $K$ goes to infinity - some intuition (beyond the fact that the proof works) about why a separation occurs between the realizable and agnostic settings with regards to the price of bandit feedback Technical Quality: 4 Clarity: 4 Questions for Authors: - What are the known lower bounds (if any) in the bandit agnostic PAC setting?
Is a poly factor of $K$ unavoidable? - When $K$ is finite, bandit PAC learnability is equivalent to PAC learnability. Presumably, this is not the case when $K$ is allowed to be infinite. Do you have any insight on what the right characterization of bandit PAC learnability might be in this case? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: No limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort. Please see our comments below on the specified questions. * “What are the known lower bounds (if any) in the bandit agnostic PAC setting? Is a poly factor of $K$ unavoidable?” We can prove a simple lower bound of $K / \epsilon + 1 / \epsilon^2$ for bandit multiclass classification as follows: The $1 / \epsilon^2$ term arises from a standard argument for two labels with probabilities $1/2 + \epsilon$ and $1/2 - \epsilon$ (this is the full information lower bound). The $K / \epsilon$ lower bound holds for an instance with two examples $x_1$ and $x_2$ and $K$ labels, where the probability of sampling $x_1$ is $2 \epsilon$ and $x_2$ has probability $1-2 \epsilon$ (each example has a unique unknown correct label). A PAC learner for this instance must correctly predict the label of $x_1$, and to do that it needs at least $\Omega(K)$ samples of $x_1$, which takes a total of $\Omega(K / \epsilon)$ samples. We believe that the polynomial dependence on $K$ in our upper bound can be further improved to decrease the gap from the lower bound described above, but we are unsure how this can be done with our current techniques. * “...Do you have any insight on what the right characterization of bandit PAC learnability might be in this case?” Good question. Yes, there is a relatively simple characterization: a class $H$ is learnable if and only if the following holds: (i) it has finite Natarajan dimension, (ii) there is a finite bound $K$ such that for every point $x$, the set $\{h(x) : h \in H\}$ has at most $K$ distinct labels. Item (i) is clearly necessary, and it is relatively simple to prove that Item (ii) is necessary (it is necessary already for learnability in the realizable case). For sufficiency, note that if Item (ii) holds then the class $H$ is effectively equivalent to a class over $K$ labels in total and can therefore be learned when its Natarajan dimension is finite, as follows from Theorem 2 in our work.
We will add this to the final version of our paper, thanks for pointing this out! --- Rebuttal Comment 1.1: Comment: Thanks for the response.
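The two-example lower-bound instance described in the rebuttal above is easy to simulate. The following is a minimal Monte Carlo sketch (the function name and the label-elimination learner are our own illustration, not an algorithm from the paper or the rebuttal): the learner only gets to act on $x_1$ when it is drawn, which happens with probability $2\epsilon$, so identifying its label among $K$ candidates takes on the order of $K/\epsilon$ rounds.

```python
import random

def rounds_to_identify(K, eps, rng):
    """Simulate the lower-bound instance: example x1 arrives with probability
    2*eps and has an unknown correct label among K; the learner tries a fresh
    untried label on each visit until the bandit feedback says 'correct'."""
    correct = rng.randrange(K)
    untried = list(range(K))
    rng.shuffle(untried)
    rounds = 0
    while True:
        rounds += 1
        if rng.random() < 2 * eps:      # example x1 is drawn this round
            guess = untried.pop(0)      # eliminate one candidate label
            if guess == correct:
                return rounds           # label of x1 identified

rng = random.Random(0)
K, eps, trials = 20, 0.05, 2000
avg = sum(rounds_to_identify(K, eps, rng) for _ in range(trials)) / trials
# On average (K+1)/2 visits to x1 are needed, each costing ~1/(2*eps)
# rounds, so avg is roughly (K+1)/(4*eps): linear in K/eps.
```

The average sits near $(K+1)/(4\epsilon)$, which for $K=20$ and $\epsilon=0.05$ is about 105 rounds, matching the $\Omega(K/\epsilon)$ scaling of the argument.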
Summary: This paper studies the problem of multiclass classification with bandit feedback, where one only receives information on whether the prediction is correct or incorrect without revealing the actual label. This can be viewed as a very specific case of the well-known contextual multi-armed bandits, where the cost vector contains a single value of $0$ and ones elsewhere. The paper shows that unlike the full-power contextual multi-armed bandits, where the sample complexity scales as $\Omega(K/\epsilon^2)$, this restricted version has a sample complexity that scales as $O(K^9 + 1/\epsilon^2)$. Moreover, their algorithm is computationally efficient given an ERM oracle. Strengths: This paper addresses a natural and fundamental problem that has attracted much attention in the literature. It introduces several original techniques, such as finding exploration distributions via a log-barrier potential as in (2). I also find the implementation of the Stochastic Frank-Wolfe algorithm using an ERM oracle to be quite interesting. The paper is quite clean and easy to read. I believe it is worth being published in NeurIPS and would have an impact within the learning theory community. Weaknesses: I do not see any significant weaknesses in the paper. This is a neat, pure theory paper that addresses an interesting problem, at least within the community. I have a few minor remarks as follows: 1. It appears to me that the $K^9$ dependency mainly arises from the Stochastic Frank-Wolfe algorithm. Can this be improved? Perhaps with a computationally inefficient algorithm? 2. The paper claims to resolve an open problem from Daniely, Sabato, Ben-David, and Shalev-Shwartz (2011). This sounds a bit overselling, in my opinion. After browsing the cited paper, I did not find any statement of this problem, particularly regarding the dependency on $1/\epsilon$. 3. Can the techniques developed in this paper be used in more general bandit settings to yield improved bounds? 4.
The $J_{x,y}(h)$ after Eq (3) must be $r_{x,y}(h)$. 5. It appears to me that your technical approach is quite similar to that of [12]. Can the authors comment on the similarities and differences with that work? Technical Quality: 4 Clarity: 4 Questions for Authors: See above. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort. Please see our comments below on the specified weaknesses and questions. * “...Can this be improved? Perhaps with a computationally inefficient algorithm?” We believe that the sample complexity (specifically, the polynomial dependence on $K$) could be reduced even with a computationally efficient algorithm, but we are not sure how to do that with our current approach. * “The paper claims to resolve an open problem from Daniely, Sabato, Ben-David, and Shalev-Shwartz (2011)...” Specifically, we refer to the paragraph titled “The price of bandit information in the batch model” in Section 4 of Daniely et al. (2011), where the authors note that “...it would be interesting to find a more tight characterization of the sample complexity in the bandit setting” while observing that the known multiplicative gap was linear (up to log factors) in the size of the label space. Since our results establish a sample complexity bound that matches (up to log factors) the full information bound for small values of $\epsilon$, we view our results as a resolution of this question. * “Can the techniques developed in this paper be used in more general bandit settings to yield improved bounds?” In the more general contextual bandit setup where there is no presumed structure on the reward functions, the best sample complexity achievable is of the form $K / \epsilon^2$, which can be obtained by the trivial algorithm that pulls actions uniformly at random and returns the policy with the best empirical reward. For the online objective of regret minimization, we believe that variants of our approach can be used to obtain the optimal $\sqrt{KT}$ regret in contextual bandits with a computationally efficient algorithm which is perhaps simpler than the previous approaches of Dudik et al. (2011) and Agrawal et al. (2014), but we did not attempt to work this out in full detail.
* “The $J_{x,y}(h)$ after Eq (3) must be $r_{x,y}(h)$.” You are correct, this is a typo that will be fixed in the final version. Thanks for catching it! * “It appears to me that your technical approach is quite similar to that of [12]. Can the authors comment on the similarities and differences with that work?” The work of [12] considers the more general contextual bandit setting with the online objective of regret minimization. Their approach essentially amounts to adaptively computing low-variance sampling distributions which also induce high estimated reward in order to minimize regret over time. Our approach is similar in the sense that we also aim to compute a low-variance sampling distribution with which to estimate rewards of hypotheses, but in the setting we consider there is a special structure on the rewards (namely, a “sparse” one-hot structure) which allows us to compute such a low-variance distribution in time complexity that is polynomial in $K$, and then use it to estimate the rewards of all hypotheses uniformly with only $K / \epsilon + 1 / \epsilon^2$ samples. This approach would not work for regret minimization, since the initial phase alone would incur linear regret. We also add that our algorithm makes use of a stochastic Frank-Wolfe procedure in order to efficiently compute the low-variance sampling distribution, an approach which differs substantially from the optimization scheme employed in each round in [12]. --- Rebuttal 2: Comment: Thank you for the response. I maintain my current rating, favoring the acceptance of the paper.
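To give a flavour of how Frank-Wolfe can compute a low-variance sampling distribution, here is a toy sketch that is entirely our own construction (not the paper's algorithm, and not the log-barrier potential of its Eq. (2)): we minimise the convex variance proxy $F(p)=\sum_i w_i/p_i$ over the probability simplex, whose closed-form minimiser is $p_i \propto \sqrt{w_i}$, using the classical Frank-Wolfe update.

```python
import numpy as np

def frank_wolfe_low_variance(w, iters=50_000):
    """Minimise F(p) = sum_i w_i / p_i over the probability simplex.

    F is a convex stand-in for the variance of importance-weighted
    estimates; its minimiser is p_i proportional to sqrt(w_i)."""
    n = len(w)
    p = np.full(n, 1.0 / n)            # start at the uniform distribution
    for t in range(iters):
        grad = -w / p**2               # gradient of F at the current p
        j = np.argmin(grad)            # linear minimisation oracle: a simplex vertex
        gamma = 2.0 / (t + 3)          # open-loop step size, keeps p strictly positive
        p = (1 - gamma) * p
        p[j] += gamma                  # move toward the chosen vertex
    return p

w = np.array([1.0, 4.0, 9.0, 16.0])
p = frank_wolfe_low_variance(w)
# The closed-form minimiser is sqrt(w) / sum(sqrt(w)) = [0.1, 0.2, 0.3, 0.4].
```

The linear minimisation oracle over the simplex is just a coordinate argmin of the gradient; in the paper's setting this role is played (loosely speaking) by the weighted ERM oracle, which is what makes the procedure computationally efficient.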
null
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper considers the problem of bandit multiclass classification, where the learner only receives the true label if their prediction was correct. That is, at time $t$ the learner receives a training sample $x_t$ with unknown label $y_t$. They then predict one of $K$ labels and are told if their prediction is correct. This problem is well studied in the case of regret minimisation. The authors consider a fixed confidence setting where the learner must be PAC$(\epsilon,\delta)$ in terms of classification error when compared to the best classifier of some hypothesis class. In addition to receiving samples, the learner has access to an ERM oracle on the hypothesis class. For the case of a finite hypothesis class the authors provide an algorithm with an upper bound of the order $(K^9 + 1/\epsilon^2)\log(\frac{N}{\delta})$ where $N$ is the size of the hypothesis class. For hypothesis classes of potentially infinite size, but finite Natarajan dimension $d_N$, the authors provide an algorithm with upper bound $(K^9 + 1/\epsilon^2)d_N\log(\frac{1}{\delta})$. Strengths: Bandit multiclass classification has been studied, mainly in the case of regret minimisation. I think it's interesting to consider the problem in such a PAC setting. The question of whether one should pay $K/\epsilon^2$ under bandit feedback, where one can achieve a rate of $1/\epsilon^2$ in the full information case, appears to have been a question of interest for some time. Weaknesses: As mentioned by the authors, in practical settings the number of labels may be large and the $K^9$ term is significant. While one should keep in mind that this work appears to be the first to get rates not dependent on $K$ as $\epsilon \rightarrow 0$, it would be nice for the authors to discuss how future works could reduce the degree of the polynomial dependence upon $K$. It would also be nice to have more discussion when comparing to contextual bandits.
The authors direct the reader to [13] for a more detailed comparison; however, from what I can tell, they consider solely regret minimisation therein and not a PAC setting. I believe the paper would benefit from a conclusion and perspectives on future work in the main text. Technical Quality: 3 Clarity: 3 Questions for Authors: Do the authors think there is a relation to contextual bandits in the PAC setting, e.g. "Instance-optimal PAC Algorithms for Contextual Bandits" by Li et al.? Have the authors considered instance dependent bounds, specifically for a finite hypothesis class? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No concern. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort. Please see our comments below on the specified weaknesses and questions. * “...it would be nice for the authors to discuss how future works could reduce the degree of the poly dependence upon $K$.” We believe that the polynomial dependence on $K$ could be reduced, but we are not yet sure how this can be done with our current techniques. One possibility is to employ an adaptive approach which combines estimated reward maximization together with lowering the variance of the sampling distribution over time. We will incorporate a short discussion in the revision, thanks for this suggestion. * “It would also be nice to have more discussion when comparing to contextual bandits…” The vast majority of previous works on contextual bandits study the online objective of regret minimization. Our setting of bandit PAC multiclass classification most closely resembles a contextual bandit setup with a different objective of finding a nearly optimal policy with as few samples as possible. * “I believe the paper would benefit from a conclusion and perspectives on future work in the main text.” Agreed - we will add a conclusion section in the revision, incorporating a discussion of the two points you suggested earlier. * “Do the authors think there is a relation to contextual bandits in the PAC setting e.g. "Instance-optimal PAC Algorithms for Contextual Bandits" Li et al. ?” Thank you for bringing this paper to our attention; we will make sure to cite and discuss it in subsequent versions of our paper. The work of Li et al. considers the PAC variant of contextual bandits, where no structural assumptions are made a priori on the reward functions. In this setting, they prove instance dependent sample complexity bounds, from which worst-case bounds of the form $\approx K/\epsilon^2$ can be inferred.
To our understanding, there is no direct relation between instance dependent bounds and the single-label classification setting which we consider, as even for the simple case of multi-armed bandits, the standard sample complexity bound of the form $\sum_i (1 / \Delta_i)$ does not suffice to obtain an improved result for sparse rewards. To obtain adaptivity to sparsity, the bounds must actually be variance-dependent (rather than gap-dependent), which to our understanding is not addressed in the work of Li et al. * “Have the authors considered instance dependent bounds, specifically for a finite hypothesis class?” We have not previously considered instance-dependent (that is, gap-dependent) sample complexity bounds, as obtaining improved instance-independent bounds, regardless of sparsity, is in itself a highly nontrivial problem. Obtaining such bounds in the multiclass setting is a fantastic question for future research.
null
null
null
null
null
null
Energy-based Epistemic Uncertainty for Graph Neural Networks
Accept (spotlight)
Summary: This paper explores the challenges associated with quantifying uncertainty in Graph Neural Networks (GNNs), particularly in domains involving interconnected data such as graphs. The authors propose a novel method called GEBM, which aggregates energy at various structural levels. This approach enhances predictive robustness and improves performance in distinguishing between in-distribution and out-of-distribution data across different anomaly types and datasets. Strengths: 1. The paper introduces a novel method based on energy considerations. The authors provide sufficient background and related work on the topic, making the main idea accessible even to readers unfamiliar with the subject. 2. Theoretical proofs and theorems are adequately presented to support the proposed methodology. The paper includes comprehensive experimental settings and achieves state-of-the-art performance across various tasks. 3. The paper is well-organized and well-written, enhancing clarity and readability. Weaknesses: 1. The experiments predominantly focus on node-level tasks. It would be beneficial to explore how the proposed method performs on graph-level tasks as well. 2. The paper lacks an ablation study specifically on different types of energy used in the GEBM model. Understanding which energy components are critical remains unclear. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. From the equations presented, it appears that this method is not limited to graphs. Are there potential challenges in applying this method to tasks beyond graph-related domains? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are discussed in this paper. GEBM is a post hoc epistemic estimator; it does not improve aleatoric uncertainty or its calibration. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and are happy that they like our paper overall. **Weakness 1, Graph-Level Tasks**: Indeed, transferring GEBM to graph-level tasks is an interesting avenue: The energies proposed by our framework are designed with a node-level objective in mind. If suitable knowledge of how graph-level properties relate to individual nodes were available, GEBM would certainly offer a powerful framework for combining them effectively. Devising such an aggregation scheme is a daunting problem that is not straightforward to solve in general, and it is beyond the scope of our work. We will mention that direction in the future work section, L.332: “Future work may build upon the framework of aggregating energies at different scales for graph-level tasks by defining and adequately aggregating node-level energy at different scales into a per-graph measure.” **Weakness 2, Ablation of Energy Types**: There seems to be a misunderstanding regarding this ablation study. We provide ablations regarding the different energy types (local, group, and independent) for all distribution shifts and datasets in Tables 3 and 26 of our paper. We will highlight that this is an ablation of the individual energies more clearly by adding the following to our paper, L.297: “Each of the former corresponds to only using one of the energies that the GEBM framework is composed of.” We hope that this sufficiently avoids this potential misunderstanding for future readers. **Question 1, Non-Graph Domains**: It is true that energy-based models are not limited to the graph domain in general, and they have also successfully been applied to i.i.d. domains [1]. As we showcase in Section 5.3 (in particular Table 3) of our paper, the effectiveness of GEBM primarily comes from aggregating energies at different structural scales in the graph. This is inherently bound to graph problems.
However, we agree with the reviewer's idea that other domains may also benefit from the paradigm of composite energy at different resolutions: One could, for example in the domain of time series, define energies at different frequencies and aggregate them into a single measure similar to GEBM. Also, density-based regularisation is likely to improve EBM-based uncertainty in other domains. The key challenge would be to develop energies that are capable of capturing anomalies of different types. How these terms would look most likely depends on the problem. We will acknowledge this nice idea in the future work section of our paper, L.332: “The effectiveness of GEBM also motivates the development of aggregate energy-based epistemic uncertainty for other domains. We leave transferring our approach to non-graph problems for future work.” Overall, we want to thank the reviewer for their review. We hope that we could clarify the misunderstanding regarding the ablation asked about in Weakness 2 and appropriately address how GEBM could be extended to non-node-level or even non-graph-level tasks in the future work section of our revised manuscript. References: [1]: Liu, Weitang, et al. "Energy-based out-of-distribution detection." *Advances in Neural Information Processing Systems* 33 (2020): 21464-21475. --- Rebuttal Comment 1.1: Title: Reply to the rebuttal Comment: Thanks for your rebuttal. My concerns have been partially addressed. I would like to maintain the rating. --- Reply to Comment 1.1.1: Comment: We are glad that we could resolve some concerns and want to thank the reviewer again for the time spent on the review, in particular for pointing out interesting directions for future work.
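For readers unfamiliar with the energy score of Liu et al. [1] cited in the rebuttal above, the following sketch computes it from classifier logits and then smooths it over a graph. The base `energy_score` follows the standard formula from that reference; the neighbour-averaging `diffuse` step is our own simplified stand-in for the idea of aggregating energy at different structural scales, not GEBM's actual operators.

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Energy score of Liu et al. [1]: E(x) = -T * logsumexp(logits / T).
    Higher energy ~ lower model confidence ~ more likely out-of-distribution."""
    z = logits / T
    m = z.max(axis=-1, keepdims=True)                     # stable log-sum-exp
    return -T * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

def diffuse(energies, adj, steps=1, alpha=0.5):
    """Hypothetical smoothing: each step mixes a node's energy with the mean
    energy of its neighbours (relies on homophily of the graph)."""
    deg = adj.sum(axis=1).clip(min=1)
    for _ in range(steps):
        energies = (1 - alpha) * energies + alpha * (adj @ energies) / deg
    return energies

logits = np.array([[10., 0., 0.],   # confident node
                   [ 0., 0., 0.],   # flat-logit (anomalous) node
                   [10., 0., 0.]])  # confident node
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])      # path graph: 0 - 1 - 2
e_raw = energy_score(logits)
e_diffused = diffuse(e_raw, adj, steps=1, alpha=0.5)
```

On this toy path graph the flat-logit node has much higher raw energy than its confident neighbours, and one diffusion step pulls its energy down toward theirs. This illustrates why an undiffused (node-independent) energy term is the one suited to detecting anomalies at individual nodes, while diffused terms capture structure at larger scales.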
Summary: The paper defines an integrable (regularized) energy function to capture epistemic uncertainty via the energy of a pretrained model. The energy function is a function of the logits, so the method is post hoc and model agnostic. The authors define a diffusion-based hierarchical energy propagation (structure agnostic + local diffusion + group diffusion) which leads both to uncertainty quantification on graphs and to an evidential model prediction. Strengths: I count the regularization of the energy function and the theory behind it as a strong point of the paper. Also, defining an energy-based model for graphs to address uncertainty is a strong starting point for uncertainty quantification on graphs, which is really under-explored at the current time. I also see that the authors have provided a complete experimental setup (with some minor exceptions which I addressed in the weaknesses). In total, I find this a strong paper; however, I believe that it can be stronger with a better flow of the text, and more theoretical contribution on the quality of the UQ. Weaknesses: 1. The authors evaluate their method structurally (line 227) by leaving out the least homophilic nodes or those with low PageRank centrality as o.o.d. This seems like an implicit assumption of homophily in the graph which is not stated anywhere. In other words, I expected to see in the propositions or setup an assumption of the form "homophily $\ge$ some constant in expectation...". I see that they referred to this assumption in the limitations, but it would be better to mention it in other sections as well. 2. *Clarity:* However successful the method is, I do not see a clear intuition on why these three levels of propagation should be combined and why all are aggregated with weight = 1. For this, I expected an intuitive introductory experiment to clearly show what happens to a node before and after each diffusion.
More importantly, I see theory showing that the regularization plus diffusion is integrable, i.e., is not infinite anywhere, but I did not find any theory on why the approach leads to good uncertainty quantification in the end, e.g., comparing it with an oracle or bounding the distance from the unseen ground-truth probability. I also did not see any synthetic experiment in this direction, which might help a lot. 3. *Experiments:* (1) I see that in scores like ECE and Brier score the method is not the best. Is there any intuitive explanation for why this is the case? I also strongly recommend that the authors mention this in the limitations of the paper. (2) I see the absence of a study on models that have diffusion in the probability space, e.g., APPNP. If GEBM can improve upon these methods, then clearly there is some additional information passed in the energy domain. 4. *Minor Typos:* (1) In Line 237 the term "fitted" is used twice. More importantly, (2) in Line 163 there should be "for $\boldsymbol{x} \in {Q}_l$" added somewhere to show that the definition of $f(\boldsymbol{x})$ is limited to that polytope. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Building on weakness no. 1 (*W1*), do you assume some homophily property like $\mathbb{P}_{v_i \sim v_j}[y_i = y_j] \ge p$? Are heterophilic graphs a theoretical limitation of your approach? If yes, can you elaborate on the theoretical insight behind it? Note that I see heterophily is mentioned as a limitation, but I cannot find a sound explanation of why, beyond leaving it as an assumption. 2. In the evaluation with feature perturbations, why didn't the authors use a random XOR shift instead of a total replacement of features?
In that case, you have control over the magnitude of the shift; intuitively, I expect the uncertainty to grow in correlation with the perturbation probability, but here I can only see the endpoints of the experiment I just mentioned -- the fully perturbed node and the original node. In general, you could also define the feature shift by randomly selecting from the noise or the original features and controlling the randomness as the magnitude of the shift. 3. In Fig. 2 (robust evidential inference), I cannot understand why the result is non-trivial. If the graph is homophilic, a simple diffusion over predicted probabilities with a strong coefficient can have a significant denoising effect on the prediction. This is especially true when the perturbation is sparse. What is the model evaluated in Fig. 2? Does it have a similar enhancement in robustness for models that already have a diffusion step, like APPNP? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think the limitations are mentioned clearly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
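A minimal sketch of the controlled feature shift the reviewer proposes here (illustrative Python; the function name and interface are my own, not from the paper): each feature independently keeps its original value or takes the noise value, so the perturbation probability `p` interpolates between the two endpoints of the experiment.

```python
import random

def controlled_feature_shift(x, noise, p, rng):
    # For each feature independently, keep the original value with
    # probability 1 - p and take the noise value with probability p,
    # so p interpolates between the original node (p = 0) and a fully
    # perturbed node (p = 1).
    return [n if rng.random() < p else o for o, n in zip(x, noise)]

rng = random.Random(0)
original = [1.0] * 8
noise = [0.0] * 8
# The two endpoints of the experiment mentioned in the review:
assert controlled_feature_shift(original, noise, 0.0, rng) == original
assert controlled_feature_shift(original, noise, 1.0, rng) == noise
```

Intermediate values of `p` then give the controlled magnitude shift the reviewer asks about.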
Rebuttal 1: Rebuttal: We thank the reviewer for their in-depth review and interesting questions and are happy to see that they find our paper strong. **Weakness 1 & Question 1, Homophily Assumption**: We assume homophily throughout our work. We agree that it is beneficial to explicitly state this assumption early on and add it to our background section, L.66: “Our work is concerned with homophilic graphs, i.e. edges are predominantly present between nodes of the same class.” All diffusion processes we study (Appendix C.6) rely on homophily. Formally quantifying the effectiveness of these operators for different problems with respect to an explicit homophily value is difficult. There are, however, empirical studies on the performance of GNNs that rely on these diffusion processes [2]. We have addressed this in L.197: “Intuitively, this is a smoothing process that relies on the in-distribution data being homophilic. In the case of non-smooth (heterophilic) data, the energy will be high for in-distribution data, which is undesired behavior.” We believe that formally studying the homophily assumption for graph diffusion is future work. **Weakness 2, Energy Types and Weights:** Uniform weights do not assume prior knowledge about the problem. We believe that tuning these weights on out-of-distribution data may be unrealistic, as such data is unavailable in real scenarios. The effectiveness of even the most uninformed (uniform) choice shows GEBM’s merits on different distribution shifts without any tuning. We agree that the paper benefits from a motivating experiment and intuition for each energy type. **1. Independent Energy** uses no diffusion and therefore targets anomalies at individual nodes. Formally, this is motivated as diffusing energy from non-anomalous low-energy neighbors would decrease the energy of an anomalous node. **2. 
Group Energy** uses diffusion to increase the energy of a node if anomalous high-energy neighbors are present and is fit to detect cluster-level anomalies. A similar formal argument can be made here. **3. Local Energy** - in contrast to Group Energy - does not aggregate class-wise energy before the diffusion. Therefore, it aggregates local average per-class evidence for a node and can identify evidence conflicts arising from structural anomalies. We devise synthetic experiments that support the intuition behind all three energy types in Figure 4 (global response): First, we induce artificial structural anomalies into real data by iteratively introducing a left-out-class (anomalous) cluster node-by-node. Group Energy shows the highest increase. Second, we induce per-node anomalies into a synthetic SBM graph and increase the magnitude, and observe Independent Energy to be the most sensitive. Lastly, we sample SBMs of varying heterophily (measured as the log ratio of inter-class to intra-class edge probabilities) and confirm that only Local Energy detects the structural anomaly. We propose to add this synthetic experiment along with the intuitive explanation to Section 4 of the updated paper. We would be happy to hear the reviewer’s opinion on that. **Weakness 3, Calibration and APPNP**: GEBM is a post-hoc estimator and does not affect the calibration of the backbone classifier and we report its ECE and Brier score for completeness only. We will explicitly highlight this more clearly in the limitations section, L.330: “The GCN backbone used in this work does not consistently achieve the strongest performance in both tasks.”. We also want to point out the calibration could easily be improved e.g. with temperature scaling [1]. We view this as outside the scope of our work as it is unrelated to GEBM’s epistemic uncertainty. We also add APPNP to the ablation of model backbones (Table 1, global response). 
GEBM is the only method that consistently detects all families of distribution shifts. This confirms that multi-scale uncertainty also improves on models based on just diffusion. We are thankful for that pointer as it makes a strong point in favor of GEBM. **Weakness 4, Typos:** Thanks, we fixed that typo. **Question 2, XOR Feature Shift**: Thanks for the nice proposal: We experiment with this XOR shift (Figure 5, global response) and observe that GEBM is the only method that reliably provides good performance with an increasing fraction of perturbed features. **Question 3, Advantage of Evidential Inference**: The model evaluated in Figure 3 is a GCN. We agree that diffusion in general can denoise anomalous predictions. The key advantage of evidential methods is that with increasing anomaly severity the evidence approaches zero while, for example, logits are provably likely to approach infinity. Therefore, a fixed amount of denoising steps is sufficient to recover from arbitrarily severe corruption while non-evidential methods require more diffusion iterations the stronger the anomaly is. As requested, we can provide evidence for this through an experiment with APPNP (Figure 4, global response). The evidential interpretation of GEBM on an APPNP is significantly more robust than APPNP alone. We add to the paper, L.289: “The advantage of this evidential perspective is that the evidence approaches zero for increasingly severe distribution shifts. Therefore, a fixed number of diffusion steps effectively counteracts the influence of anomalous neighbors when making predictions for a node.” We are very grateful for the pointers. We believe that the changes to our manuscript and additional experiments help to clarify and consolidate the core assumptions and results of our work. **References:** [1]: Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger. “On Calibration of Modern Neural Networks”. International Conference on Machine Learning (ICML) 2017. 
[2]: Palowitch, John, et al. "GraphWorld: Fake Graphs Bring Real Insights for GNNs." *Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining*. 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed reply. My concerns were partially addressed. With the informative reply from the authors, I find this paper an acceptable and strong study. This is why I increased my score. --- Reply to Comment 1.1.1: Comment: We are glad that we could address the reviewer's concerns and believe that our paper benefits from the additional experiments prompted by their feedback. Thank you for the very helpful review!
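For intuition, the kind of energy smoothing discussed in this thread (the homophily-dependent diffusion of Weakness 1 and the diffusion-based energy types) can be sketched as an APPNP/label-propagation style update. The propagation form, and the roles of `alpha` and `t`, are illustrative assumptions here, not the paper's exact operator:

```python
def diffuse_energy(energy, neighbors, alpha, t):
    # One possible form of the smoothing process: at each step a node's
    # energy is averaged over its neighbors and then mixed with its
    # initial energy, weighted by alpha (APPNP/label-propagation style).
    e0 = list(energy)
    e = list(energy)
    for _ in range(t):
        agg = []
        for i, nbrs in enumerate(neighbors):
            agg.append(sum(e[j] for j in nbrs) / len(nbrs) if nbrs else e[i])
        e = [(1 - alpha) * a + alpha * e0[i] for i, a in enumerate(agg)]
    return e

# A high-energy (anomalous) node surrounded by low-energy neighbors is
# smoothed toward its neighborhood:
energy = [4.0, 0.0, 0.0]
neighbors = [[1, 2], [0], [0]]
assert diffuse_energy(energy, neighbors, alpha=0.5, t=1) == [2.0, 2.0, 2.0]
```

With `alpha = 1` the initial energies are kept, matching the intuition that high `alpha` favors node-level anomaly detection while low `alpha` and more steps favor cluster-level shifts.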
Summary: This paper introduces a method for post-hoc epistemic uncertainty estimation in logit-based Graph Neural Networks (GNNs) by aggregating energy scores at different levels, including node, local, and group levels. Extensive experiments show the effectiveness of the proposed framework. Strengths: 1. The paper rigorously evaluates the proposed method under various experimental conditions, such as out-of-distribution (OOD) selection, different GNN backbones, and both inductive and transductive evaluation settings. 2. It comprehensively aggregates uncertainties at multiple levels in the graph, including node-level uncertainties, class-specific neighbor information, and propagated energy through diffusion. 3. The manuscript is well-structured, with a clear presentation of concepts, logical flow, and detailed preliminary knowledge. Weaknesses: 1. The paper lacks a detailed discussion on the selection of hyperparameters, especially for the diffusion module $P_A$. Specifics about the parameters $\alpha$ and $t$ mentioned in Appendix C are not sufficiently discussed. Including ablation studies on different graph diffusion architectures, such as label propagation referenced in the Appendix or APPNP used in the GPN paper, would enhance the paper. 2. The paper states that common GNNs suffer from overconfidence due to their similarity to findings on ReLU neural networks[1]. However, literature [2] [3] suggests that predictions from shallow GNNs are typically under-confident. The paper will benefit from evidence on the over-confidence issue of GNNs. 3. Section 4.4 discusses the relationship between energy scores from logit-based classifiers and total evidence in evidential models. The paper lacks an explanation for why the proposed model outperforms evidential models in epistemic uncertainty prediction, particularly how it addresses the feature collapsing issue in density-based models [4]. [1] Matthias Hein, Maksym Andriushchenko, and Julian Bitterwolf. 
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 41–50, 2019. [2] Wang, Xiao, Hongrui Liu, Chuan Shi, and Cheng Yang. "Be confident! Towards trustworthy graph neural networks via confidence calibration." Advances in Neural Information Processing Systems 34 (2021): 23768-23779. [3] Wang, Min, Hao Yang, Jincai Huang, and Qing Cheng. 2024. "Moderate Message Passing Improves Calibration: A Universal Way to Mitigate Confidence Bias in Graph Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 38 (19):21681-89. https://doi.org/10.1609/aaai.v38i19.30167. [4] Mukhoti, Jishnu, Andreas Kirsch, Joost van Amersfoort, Philip HS Torr, and Yarin Gal. "Deep deterministic uncertainty: A simple baseline." arXiv preprint arXiv:2102.11582 (2021). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the paper perform inductive training on the GCN backbone when OOD nodes and edges are excluded during the training phase? Does it use graph sampling or data augmentation techniques? 2. In Corollary 4.3, what is meant by "any $x\in \mathbb{R}^d$"? Please provide a precise range for $x$ or a probability. 3. In Equation (9), the regularized energy from the three structural scales contributes equally to the final energy score. Table 3 shows varying impacts of energy at these scales. Why was the decision made to use equal weighting? 4. There are inconsistencies between some model names mentioned in Section 5.1 and those in the tables. 5. What are the differences in the distribution shifts used in this paper compared to those in GPN or GNNSafe, and why did the authors make these changes? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: YES Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their very thorough review and specific pointers for improvements. We are happy that the reviewer likes our rigorous evaluation and structure. **Weakness 1, Ablations on Diffusion**: We ablate $t$ and $\alpha$ and the diffusion operator in Figure 1 (global response) and report performance on leave-out-classes, feature perturbations, and the homophily shift. Label Propagation is used by default in GEBM and GNNSafe while APPNP and GPN use symmetric diffusion. As expected, low $t$ and high $\alpha$ aid node-level anomaly detection while the opposite holds for cluster and local shifts. Label Propagation achieves satisfactory performance over the entire range of diffusion hyperparameters, which justifies using it in our experiments. In particular, we did not tune any hyperparameters for good o.o.d. detection. As stated in Appendix C.6, it does not bias the energy toward high-degree nodes, which explains its advantages over the other diffusion types. It performs well over a broad range of hyperparameters, making GEBM less sensitive to those. **Weakness 2, Evidence on GNN Overconfidence**: Previous work indeed finds that GNNs are under-confident on in-distribution test nodes. Our claim regards confidence under distribution shifts, for which we are not aware of similar studies. We supply evidence for the overconfidence of GNNs in Figure 2 (global response): We find that both logit-based and unregularized energy-based confidence increase far from the training data. This confirms our theoretical analysis and justifies the use of distance-based regularizers. 
We are grateful for this pointer, and adapt our paper to explicitly disambiguate (L.157): “We remark that previous studies found GNNs to be underconfident on in-distribution data […] while the aforementioned issue of overconfidence arises from high distance to training data induced by a distribution shift (see Appendix …).” **Weakness 3, Advantage over Evidential Methods and Feature Collapse**: This is an interesting point: We do not claim energy to be the strictly superior choice to evidential methods. The ablation in Table 3 shows aggregating uncertainty at different structural scales is the driving factor in GEBM’s effectiveness: This enables it to outperform the evidential GPN. Transferring this paradigm to evidential methods is an interesting direction for future work which we added to L.332: “While GEBM enables robust evidential inference, future work may build upon its paradigm of aggregating different structural scales in the graph for evidential methods.”. One merit of EBMs is that they offer a theoretically sound and well-motivated way to combine uncertainty at different structural scales. Regarding feature collapse, we are not aware of any evidence for this phenomenon in GNNs. We expect that strong spectral normalization (small $\sigma$) helps detect severe feature perturbations. We confirm this by ablating spectrally normalized GCNs in Table 1 (global response), but do not find improvements consistent over all shifts. It also decreases performance on other distribution shifts. We conjecture that either feature collapse is not as prevalent in GNNs (potentially due to smoothing) or that L2-based measures as in [1] are not suitable for the graph domain. 
While beyond the scope of our work, we believe this to be a promising direction for future work and add to L.332: “Further development regarding the density-based regularizer, for example by studying the effects of feature collapse […], may improve the energy that GEBM is based on.” **Question 1, Data Augmentation:** We do not perform any data augmentation in our work. **Question 2, Precise Definition for x**: Thank you for the pointer, this was phrased imprecisely: The statement holds for any x almost surely, i.e. the set of x for which it does not hold (the kernel space of the affine function) has measure zero. We adapt our phrasing to “and any $x \in R^d$ almost surely:”. **Question 3, Energy Weights**: We choose equal weights as we do not assume prior task-dependent knowledge about which structural scale is more important than others. We see tuning these weights on o.o.d. data as unrealistic since such data is unavailable in real scenarios. We believe that the effectiveness of even the most uninformed (uniform) choice underlines the merits of the paradigm behind GEBM. Practitioners can also adapt these weights to incorporate prior knowledge. If the reviewer thinks changing Equation (9) to account for weights is beneficial, we are happy to adapt the paper accordingly. **Question 4:** Thank you for pointing it out, we fix this by changing “GNNSafe → GCNSafe” and “Heat → GCN-HEAT”. **Question 5**: Our suite of distribution shifts is a superset of the GPN benchmark, with the most notable addition being a structural shift. The single-graph distribution shifts of GNNSafe fall into the same categories as our benchmark: First, they, too, use left-out-classes. Second, they interpolate between node features similarly to our feature perturbations. While there is no guarantee this provides semantically meaningful features (especially for bag-of-words features), we use different noise distributions to control similarity to in-distribution data by matching its modality. 
Lastly, GNNSafe uses a fixed SBM to perturb the structure: Our homophily-based shift induces a more realistic structural shift as the o.o.d. nodes are drawn from real data. Our benchmark studies the same types of shifts as both related works pointed out by the reviewer. Should they believe that we miss a relevant setting, we are happy to include it in the updated paper as well. We again want to thank the reviewer for the interesting and helpful discussion points that helped us to improve the manuscript. **References:** [1]: Mukhoti, Jishnu, Andreas Kirsch, Joost van Amersfoort, Philip HS Torr, and Yarin Gal. "Deep deterministic uncertainty: A simple baseline." arXiv preprint arXiv:2102.11582 (2021). --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their efforts in providing additional experiments and clarifications. They have addressed my concerns, and I have increased my score. Additionally, I agree with most of the points raised in the rebuttal, particularly regarding feature collapsing and under/over-confidence scenarios. I am also interested in exploring the differences and commonalities between energy-based models and evidential-based models. --- Reply to Comment 1.1.1: Comment: We are happy that the reviewer finds the additional ablations and clarifications helpful and we, too, believe it makes the paper stronger. Thank you for the very useful input!
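The robustness mechanism from the reply to Question 3 (evidence approaching zero under severe shifts, so a fixed number of diffusion steps suffices) can be illustrated with a toy evidence-weighted aggregation. The update rule below is an assumption for illustration only, not the paper's exact inference:

```python
def evidential_diffusion(pred, evidence, neighbors, steps):
    # Sketch of why near-zero evidence enables robust inference: in an
    # evidence-weighted average, a corrupted node with evidence ~0
    # contributes nothing, so a fixed number of steps recovers it no
    # matter how severe the corruption is. Evidence weights are kept fixed.
    p = list(pred)
    for _ in range(steps):
        new = []
        for i, nbrs in enumerate(neighbors):
            w = evidence[i] + sum(evidence[j] for j in nbrs)
            num = evidence[i] * p[i] + sum(evidence[j] * p[j] for j in nbrs)
            new.append(num / w if w > 0 else p[i])
        p = new
    return p

# Node 0 is arbitrarily corrupted but has zero evidence; one step suffices:
pred = [1000.0, 1.0, 1.0]
evidence = [0.0, 1.0, 1.0]
neighbors = [[1, 2], [0], [0]]
assert evidential_diffusion(pred, evidence, neighbors, steps=1) == [1.0, 1.0, 1.0]
```

A non-evidential average, by contrast, would need more iterations the larger the corrupted value is, which is the contrast drawn with plain diffusion models like APPNP.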
Rebuttal 1: Rebuttal: We thank all the reviewers for the time spent providing valuable feedback. We added all experiments and ablations provided in the pdf to the final version of our manuscript. We briefly want to summarise the most relevant additions to the paper. In the individual rebuttals, we propose word-by-word changes for these points. - We add APPNP as another GNN backbone to our ablation study to show that the success of GEBM is rooted in describing uncertainty at different structural scales rather than simply diffusing the corresponding scores. On APPNP, GEBM outperforms existing methods and enables more robust evidential inference. - We ablate different diffusion modules and hyperparameters to justify the choices we made for GEBM and find that GEBM is highly effective over a broad range of settings. - We justify each energy type and provide an introductory synthetic experiment to ease an intuitive interpretation. - We supplement additional evidence on the over-confidence of GNNs, distinguish it clearly from under-confidence on in-distribution data, and elaborate on the role and empirical relevance of feature collapse for these models. - We provide clear directions for future work that extend to graph-level and non-graph-level domains. Overall, we are very happy with the constructive input and believe that based on the reviews we were able to further solidify our approach from an intuitive, theoretical, and empirical perspective. Pdf: /pdf/fa2a3b8d6fd13afd7a53c81960ecf65c67d3f799.pdf
NeurIPS_2024_submissions_huggingface
2024
Linear Time Approximation Algorithm for Column Subset Selection with Local Search
Accept (poster)
Summary: This paper considers the Column Subset Selection (CSS) problem. In this problem the input is an arbitrary $A\in \mathbb{R}^{n\times d}$ and a positive integer $k$ ($k$ is thought of as much smaller than $\min\{n,d\}$). The goal is to output a subset of $k$ columns of $A$, denoted by $S\in \mathbb{R}^{n\times k}$, such that the residual error $\|A - SS^{\dagger}A\|_F^2$ is minimized (here $S^{\dagger}$ is the pseudoinverse of $S$). Previously, all but one algorithm for this problem ran in $\Omega(\min\{nd^2,n^2d\})$ time with $O(k^2)$ approximation factors. One algorithm of Deshpande and Vempala did run in linear time, i.e. $O(nd)$ time, but had a $(k+1)!\sim k^k$ approximation factor. The contribution of this paper is to design an algorithm that achieves linear $O(nd)$ runtime and a $100(k+1)$ approximation ratio. Strengths: This paper has many strengths. The first is dramatically improving the previous best approximation ratio, which was practically infeasible, to a reasonable one for small values of $k$. The second is that the authors show the practical feasibility of the algorithm by demonstrating that it is at least 10 times faster than other algorithms across all datasets they consider. Weaknesses: I don't see any major weaknesses. One minor weakness is that the experimental section lacks a discussion regarding the parameter settings and experimental setup for the baseline algorithms. Technical Quality: 4 Clarity: 3 Questions for Authors: Can the authors please share the experimental details and parameter settings that are used for implementing the algorithms. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Authors have addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
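For concreteness, the residual error $\|A - SS^{\dagger}A\|_F^2$ from the summary can be computed explicitly in the simplest case $k = 1$, where $SS^{\dagger}$ is just the orthogonal projection onto the single selected column (an illustrative pure-Python sketch with columns given as lists, not code from the paper):

```python
def css_residual_k1(columns, idx):
    # Frobenius residual ||A - S S^+ A||_F^2 for k = 1, i.e. a single
    # selected column s: each column a of A is replaced by its component
    # orthogonal to span(s), and the squared norms are summed.
    s = columns[idx]
    ss = sum(x * x for x in s)
    total = 0.0
    for a in columns:
        coef = sum(x * y for x, y in zip(s, a)) / ss
        total += sum((y - coef * x) ** 2 for x, y in zip(s, a))
    return total

# Picking the repeated direction explains more of the matrix:
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]  # columns of a 2x3 matrix
assert css_residual_k1(A, 0) == 1.0  # only the e2 column remains
assert css_residual_k1(A, 1) == 2.0  # both e1 columns remain
```

CSS then asks for the $k$ columns minimizing this quantity, measured against the best rank-$k$ approximation from the SVD.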
Rebuttal 1: Rebuttal: We thank the reviewer for the positive rating and the thoughtful comments. In the following we address the concerns. **Question 1. Can the authors please share the experimental details and parameter settings that are used for implementing the algorithms.** Response: We thank the reviewer for raising this question and apologize for not making it clearer in the paper. Below are the experimental details and parameter settings used in our implementations: - Algorithm Implementation: The algorithms are implemented in Matlab. For randomized algorithms, we ensured reproducibility by setting random seeds. The implementation details, including the source code, will be made available in a public repository for transparency and reproducibility. - Algorithms and Parameter Settings: In our experimental evaluation, we consider five distinct algorithms, summarized as follows: 1. The TwoStage algorithm involves two processes: in the first stage, $\Theta(k\log k)$ columns are sampled according to the properties of the top-$k$ right singular vectors of the input matrix. In the second stage, exactly $k$ columns are selected via an RRQR factorization of the subset matrix formed by the $\Theta(k\log k)$ columns. The algorithm includes an oversampling parameter $c$ that controls the constant in $\Theta(k\log k)$. We test all integer values of $c$ from 1 to 50 to find the solution with the lowest error. Since the algorithm involves randomness, we construct 40 submatrices by sampling in the first stage, and then run the RRQR factorization to obtain a solution for each submatrix. Finally, we select the best result from the 40 solutions. 2. The Greedy algorithm is a deterministic algorithm. The core idea is to iteratively find the column $v$ that minimizes $f(S \cup v, A)$ for the current solution $S$ until $S$ contains exactly $k$ columns. We run the algorithm once for each varying $k$ value and return the solution. 3. 
The VolumeSampling algorithm is a randomized algorithm. It uses a sampling probability proportional to the volume of the parallelepiped spanned by the $k$-column subset. In VolumeSampling, we use an algorithm for computing the characteristic polynomial of an $n \times n$ matrix as described in Section 16.6 of [1]. 4. The ILS algorithm is a heuristic algorithm. It starts by uniformly selecting $k$ columns to construct an initial solution and then improves the quality of the current solution using a heuristic local search operator. We set the number of iterations to $2k$. 5. The LSCSS algorithm, given in Algorithm 1 of this paper, uses two-step mixed sampling and local search methods. We set the number of iterations to $2k$, as in the ILS algorithm. - Experimental Environment: Experiments are conducted on a machine with 72 Intel Xeon Gold 6230 CPUs and 4TB memory. The operating system is Ubuntu 16.04 LTS. The implementation is done in Matlab 2015, with necessary compatibility settings applied. - Evaluation Metrics: We used the error ratio $\Vert A - SS^\dagger A \Vert_F^2 / \Vert A - A_k \Vert_F^2$ to evaluate the performance of the algorithms, where $A_k$ is the best rank-$k$ approximation of $A$. The execution time of each algorithm is recorded to compare computational efficiency. - Experimental Procedure: We test the TwoStage, VolumeSampling, ILS, and LSCSS algorithms on each dataset 10 times to calculate the average error ratio and running time. Since the Greedy algorithm is deterministic, it is tested only once per dataset, with both its error ratio and running time recorded. - Experimental Results: We divide the experimental results into two parts. The first part, presented in Tables 3 and 4, contains the error ratios and running times for 8 large datasets. The second part, presented in Tables 6 to 12, contains the detailed results for 16 smaller datasets. 
The experimental results show that our algorithm is at least 10 times faster than other algorithms on large datasets, achieving the best accuracy on most datasets. Additionally, on small datasets, our algorithm is at least 2 times faster than other algorithms and achieves the best accuracy in most cases. We will include these experimental details and parameter settings in the revised version of the paper. [1] Bürgisser Peter, et al. Algebraic complexity theory, volume 315 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1997. With the collaboration of Thomas Lickteig. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks to the authors for the details and for addressing my question. My score remains the same.
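As a sanity check of the Greedy baseline described in this rebuttal (iteratively adding the column that minimizes the residual of the current solution), here is a self-contained toy version. This is an illustrative pure-Python sketch with columns as lists; the actual implementations are in Matlab and far more optimized:

```python
def residual(columns, selected):
    # ||A - S S^+ A||_F^2 via Gram-Schmidt: build an orthonormal basis of
    # the selected columns, then sum the squared norms of what remains of
    # every column after projecting that basis out.
    basis = []
    for idx in selected:
        v = list(columns[idx])
        for b in basis:
            c = sum(x * y for x, y in zip(b, v))
            v = [x - c * y for x, y in zip(v, b)]
        n = sum(x * x for x in v) ** 0.5
        if n > 1e-12:
            basis.append([x / n for x in v])
    total = 0.0
    for a in columns:
        r = list(a)
        for b in basis:
            c = sum(x * y for x, y in zip(b, r))
            r = [x - c * y for x, y in zip(r, b)]
        total += sum(x * x for x in r)
    return total

def greedy_css(columns, k):
    # Greedy CSS: repeatedly add the column minimizing the residual of
    # the current solution, as described in the rebuttal above.
    S = []
    for _ in range(k):
        best = min((i for i in range(len(columns)) if i not in S),
                   key=lambda i: residual(columns, S + [i]))
        S.append(best)
    return S

A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
assert greedy_css(A, 2) == [0, 1]  # picks the repeated e1 direction, then e2
```

Each greedy step here costs a full pass over all candidate columns, which is exactly the expense the paper's two-step mixed sampling avoids.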
Summary: This paper proposes a new algorithm for the column subset selection problem which combines a local search-type strategy with adaptive sampling techniques to obtain an algorithm running in time linear in $nd$ (i.e. the size of the input matrix) for constant $k$. The resulting solution selects exactly $k$ columns and achieves an approximation ratio at most $O(k)$ against the optimal rank $k$ solution given by the SVD, which is comparable to the best known approximation ratio of $(k+1)$. The main idea is that each iteration of the local search can be made to run in linear time by reducing the number of candidates considered via adaptive sampling. Strengths: This work achieves a new point in the trade-off between running time and approximation ratio for the problem of column subset selection. Column subset selection is an important and heavily studied problem and thus this work should be of interest to many researchers, especially those working in numerical linear algebra. Weaknesses: The analysis of the algorithm is highly reminiscent of the adaptive sampling analyses from works such as https://faculty.cc.gatech.edu/~vempala/papers/relative.pdf https://dl.acm.org/doi/10.1145/1250790.1250884 This includes, for example, the idea of relating the cost difference from the optimum to a bound on the success probability of adaptive sampling (Lemma 3.8) and then boosting with repetition. The only new point here seems to be to use similar ideas to throw away “bad” columns (i.e. use local search) so that the solution size stays at $k$ rather than growing with each iteration. However, this is still a nice and new idea and requires delicate work to make this go through. 
Technical Quality: 3 Clarity: 2 Questions for Authors: Questions - For the problem of row sampling for $\ell_p$ subspace approximation, obtaining tight bounds on the number of rows required for a (1+eps) approximation is an open problem under active research (see, e.g., https://dl.acm.org/doi/10.1145/1250790.1250884). Can the combination of local search and adaptive sampling be used to obtain state of the art results for this problem? Minor comments - It would be helpful to move the main theorem Theorem 3.9 earlier so that the formal main result can be seen earlier. In particular, the notion of approximation ratio should be clarified before Table 1, since otherwise it is confusing whether the approximation ratio compares against the best rank k solution or the best subset of k columns. - The wording of Lemmas 3.7 and 3.8 could be unified for sake of consistency. - You can remove the section 7 header. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
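The adaptive sampling referenced in the weaknesses can be sketched in its simplest form: each column is sampled with probability proportional to its squared residual against the current solution, so already-covered columns are never re-sampled. Below is an illustrative one-selected-column version (pure Python; names and interface are my own, not from the cited works):

```python
def adaptive_probs(columns, s):
    # Adaptive sampling distribution against a single selected column s:
    # project s out of each column and normalize the squared residuals,
    # so columns already in span(s) get probability zero.
    ss = sum(x * x for x in s)
    res = []
    for a in columns:
        coef = sum(x * y for x, y in zip(s, a)) / ss
        res.append(sum((y - coef * x) ** 2 for x, y in zip(s, a)))
    z = sum(res)
    return [r / z for r in res]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 2.0]]
probs = adaptive_probs(A, [1.0, 0.0])
assert probs == [0.0, 0.2, 0.0, 0.8]  # mass only on uncovered directions
```

Relating these residual-proportional probabilities to the cost difference from the optimum, and then boosting by repetition, is the analysis pattern the review points to.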
Rebuttal 1: Rebuttal: We thank the reviewer for the positive rating and the thoughtful comments. In the following we address the concerns. **Question 1. For the problem of row sampling for $\ell_p$ subspace approximation, obtaining tight bounds on the number of rows required for a (1+eps) approximation is an open problem under active research (see, e.g., https://dl.acm.org/doi/10.1145/1250790.1250884). Can the combination of local search and adaptive sampling be used to obtain state of the art results for this problem?** Response: We thank the reviewer for raising this interesting question. For the CSS problem, we utilize a linear-time local search method, overcoming the difficulty of enumerating swap pairs in each local search step. However, we have only provided local search solutions for the CSS problem under the Frobenius norm. For the CSS problem under the $\ell_p$ norm, it is much more challenging to use local search than under the Frobenius norm, for the following reasons: (1) Lemma 3.3 becomes invalid because its inequality does not hold for the $\ell_p$ norm. (2) Lemma 3.4 becomes invalid because it constructs an approximate relationship between the current solution and the SVD. As given in [1, 2, 3], the $\ell_p$-norm CSS problem is a special case of $\ell_p$-norm subspace approximation. Consequently, using local search for $\ell_p$ subspace approximation has the following difficulties: (1) For the $\ell_p$ subspace approximation problem, analyzing the relationship between the solution after adding or removing a column and the optimal solution is challenging. (2) Obtaining a high-quality initial solution is difficult. These two difficulties are the major obstacles to designing an approximation algorithm for $\ell_p$-norm subspace approximation by local search. Thus, designing a (1+eps) approximation algorithm for subspace approximation is much more challenging, which deserves further study. [1] Deshpande, Amit, and Rameshwar Pratap. 
On Subspace Approximation and Subset Selection in Fewer Passes by MCMC Sampling. arXiv preprint arXiv:2103.11107 (2021). [2] Deshpande, Amit, and Rameshwar Pratap. "One-Pass Additive-Error Subset Selection for $\ell_p$ Subspace Approximation and $(k, p)$-Clustering." Algorithmica 85.10 (2023): 3144-3167. [3] Mahankali, Arvind V., and David P. Woodruff. "Optimal $\ell_1$ column subset selection and a fast PTAS for low rank approximation." Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 560-578, 2021. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I understand the extension to $\ell_p$ subspace approximation would not be obvious. I have increased my score.
Summary: The Column Subset Selection (CSS) problem aims to select a sub-matrix with $k$ columns from a matrix to minimize the residual error. Previous algorithms often have non-linear running times or higher approximation ratios. This paper proposes a local search-based algorithm with linear running time, utilizing a two-step mixed sampling method to reduce enumerations and a matched swap pair construction to maintain approximation quality. Empirical experiments demonstrate superior performance in quality and speed compared to existing methods, particularly in large datasets. Strengths: - The proposed new iterative algorithm for column subset selection is simple and sound. - The theoretical analysis of the algorithm appears novel and solid. - In experiments, the proposal is generally the fastest one together with competitively small errors. Weaknesses: - Line 41: What is UG? - Line 132: Is Lemma 2.1 necessary for the main text? - Algorithms 1 and 2: It would be better to give the full name of the algorithms. - Section 3 does not clearly discuss the algorithmic distinction beyond ILS and Greedy. - The theoretical results of the algorithm are mixed with the algorithmic details, making the paper poorly structured. Additionally, the assumptions are not explicitly listed and lack sufficient discussion. - Table 4: The standard error of Greedy hasn’t been reported. Besides, it would be better to put Table 4 into the main text and discuss why LSCSS achieves superior approximation compared to the competitors. - Lines 320 and 321 are repetitive. Technical Quality: 3 Clarity: 2 Questions for Authors: - Algorithm 2 (line 2): How would the numerical performance change if $10k$ columns were replaced with a different number of columns (e.g., $2k$ or even a constant [1])? 
- Using the objective function to guide which column to swap out shares a similar spirit with recent advanced algorithms for best-subset selection under linear regression for sparsity-constrained optimization [2, 3]. It would be interesting to discuss these works. - How should $k$ be selected in practice? - Table 9: The error ratio of TwoStage on the ComCri dataset is weird. It is exceptionally large when $k=50$ but suddenly reduces to a comparable result when $k=100$. ### Reference [1] Bahmani, Sohail, Bhiksha Raj, and Petros T. Boufounos. "Greedy sparsity-constrained optimization." The Journal of Machine Learning Research 14.1 (2013): 807-841. [2] Wang, Zezhi, et al. "Sparsity-Constraint Optimization via Splicing Iteration." arXiv preprint arXiv:2406.12017 (2024). [3] Zhu, Junxian, et al. "A polynomial algorithm for best-subset selection problem." Proceedings of the National Academy of Sciences 117.52 (2020): 33117-33123. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have discussed the limitations in the end of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
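As background for the discussion in this thread, the CSS residual objective $f(S, A) = \Vert A - SS^\dagger A \Vert_F^2$ can be sketched in a few lines of NumPy. This is an illustrative sketch only; `css_objective` is a name introduced here, not from the paper.

```python
import numpy as np

def css_objective(A, idx):
    """f(S, A) = ||A - S S^+ A||_F^2 for S = A[:, idx]: the residual
    left after projecting A onto the span of the selected columns."""
    S = A[:, idx]
    proj = S @ np.linalg.pinv(S)  # orthogonal projector onto span(S)
    return np.linalg.norm(A - proj @ A, "fro") ** 2

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))
full = css_objective(A, list(range(8)))  # all columns: residual ~ 0
part = css_objective(A, [0, 1, 2])       # a 3-column subset leaves a residual
assert full < 1e-9
assert part > full
```

Selecting all columns reconstructs the matrix exactly (up to round-off), while a strict subset leaves a positive residual, which is the quantity the paper's local search tries to shrink.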
Rebuttal 1: Rebuttal: We thank the reviewer for the positive rating and the thoughtful comments. In the following we address the concerns. **Weakness 1. Line 41: What is UG?** Response: We thank the reviewer for this question. "UG-hard" refers to problems that are as hard as the Unique Games problem, based on the Unique Games Conjecture (UGC). If a problem is UG-hard, it means that there is no efficient algorithm to approximate the problem beyond a certain factor, assuming the UGC is true. This suggests the problem is very difficult to solve unless P=NP or the UGC is disproven. **Weakness 2. Is Lemma 2.1 necessary for the main text?** Response: We apologize for the confusion. Lemma 2.1 is used in the proof of Lemma 3.2 to ensure that the third inequality (line 417) holds. Thus, it is necessary. **Question 1. How would the numerical performance change if $10k$ columns were replaced with a different number of columns (e.g., $2k$ or even a constant [1])?** Response: We thank the reviewer for asking this question. The choice of $10k$ columns is made to ensure a sufficient probability of sampling a good column in each iteration. If we change it to $2k$ columns, it would lower the probability of successfully finding a good column. In the following, we provide additional results comparing the selection of $10k$ columns and $2k$ columns for our LSCSS algorithm with varying $k$ values on the 8 datasets (listed in the paper). The detailed results, shown in Figures 2 and 3 of the attached PDF, indicate that the accuracy of LSCSS is better with the selection of $10k$ columns than with the selection of $2k$ columns. **Question 2. Using the objective function to guide which column to swap out shares a similar spirit with recent advanced algorithms for best-subset selection under linear regression for sparsity-constrained optimization [2, 3]. It would be interesting to discuss these works.** Response: We thank the reviewer for asking this question.
For best-subset selection in regression, Wang et al. [2] proposed the ABESS algorithm using the splicing technique. The method uses backward-sacrifice and forward-sacrifice operations to decide which variables should be swapped. The splicing technique improves the quality of subset selection by minimizing the sum of residual squares and continuously optimizing the objective function value. For the sparsity-constrained optimization problem, Zhu et al. [3] proposed the SCOPE algorithm. This algorithm generates new candidate sets through local coordinate exchanges and uses the objective value to guide these exchanges, ensuring that the objective value decreases in each iteration. We use a local search algorithm based on two-step mixed sampling to achieve linear-time solutions for the CSS problem. To avoid $O(dk)$ enumerations of swap pairs, we designed the two-step mixed sampling technique, using the objective function to guide which column to swap out, reducing the enumeration of swap pairs from $O(dk)$ to $k$. This reduces the running time of a single local search step to $O(ndk^2)$. By analyzing the sampling probability designed based on the objective function, we can provide theoretical guarantees for the local search. In summary, both our algorithm and those in [2, 3] use some form of objective or loss function to guide the selection and swapping of variables. However, our algorithm differs from the ones in [2, 3]. The methods in [2, 3] use the objective function to guide the swapping process, identifying columns that improve the objective function value. Our algorithm uses the objective function to design the sampling probability, which guides the swapping process to accelerate the local search step. Additionally, we can establish a theoretical relationship between the solution generated by this swapping strategy and the optimal one.
Using the strategies from [2, 3] might make it difficult to establish this kind of approximate relationship between solutions. **Question 3. How should $k$ be selected in practice?** Response: We thank the reviewer for asking this subtle question. As pointed out in [1, 2], the value of $k$ is often treated as a constant input, and the choice of $k$ typically ranges between 1 and 500 in experiments. Among all the values between 1 and 500, it is often hard to decide which one is best. For the medical data analysis application of CSS, Shanab et al. [3] noted that the selection of $k$ is related to the objective function value and correlation measures. However, Shanab et al. [3] did not give a procedure to find an appropriate $k$. As far as we know, there is no relevant result deciding the choice of $k$ in practice. How to determine the appropriate $k$ value is an interesting problem, and deserves further study. [1] Christos Boutsidis et al. An improved approximation algorithm for the column subset selection problem. In Proc. 20th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 968-977, 2009. [2] Venkatesan Guruswami and Ali Kemal Sinop. Optimal column-based low-rank matrix reconstruction. In Proc. 23rd Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1207-1214, 2012. [3] Shanab, S., et al. Stable bagging feature selection on medical data. Journal of Big Data, 8(1):1-18, 2021. **Question 4. Table 9: The error ratio of TwoStage on the ComCri dataset is weird. It is exceptionally large when $k=50$ but suddenly reduces to a comparable result when $k = 100$.** Response: We thank the reviewer for this question. The TwoStage algorithm involves sampling a subset of columns to construct a candidate solution. The ComCri dataset contains 123 columns. When $k = 50$, the algorithm samples a candidate set of 89 columns. In this case, the error ratio is exceptionally large. When $k = 100$, following the method in [1], the sampled size is larger than 123.
Then, the algorithm selects all 123 columns, which makes the subsequent process easier. Thus, the error ratio in such cases reduces to a comparable result.
Summary: This paper studies column subset selection. Given an n*d matrix A, how do we select a sub-matrix A_S of k columns which preserves the substance of A? That is, the reconstruction error ||A - (A_S A_S#) A||_F should be minimized, where A_S# is the pseudo-inverse of A_S. There are many approximation algorithms for this problem, and it is widely studied in the theoretical ML/CS literature. There is a plethora of running-time vs. approximation-factor guarantees in the literature. This paper tries to match the SoTA approx. factor with a faster running time. Indeed, the SoTA algorithm has a (k+1)-factor approximation with running time O(n d^2 k). If we wanted linear in n and d, the best known algorithm had approx. factor (k+1)! with O(n d k^2) running time. This paper's main contribution is a 110(k+1)-approximation with running time O(n d k^3). Technically, the main innovation is to show that a constrained version of local search works. Local search is a good algorithm where we initialize a decent solution, say the (k+1)! approx. solution, and iteratively swap out a column in the current solution with one not in it. This step is expensive, having k*d possible combinations, and we need to see which is the best! Multiplying this with the # of rounds will yield large running times not linear in d. This paper overcomes this by showing that not all swaps need to be tried. Indeed, they sample ~10k possible columns based on some form of random sampling (by looking at the residual importance of various columns), and further uniformly sample one of them at random to swap in. Now, the complexity of testing swaps is only O(k)! The challenge is now to show that the algorithm will not get stuck in bad local optima. For this the authors present a set of "test swaps" based on the optimal set of columns and the algorithm's set of columns, and show that we will end up sampling one such pair with decent chance, if the current solution is far from optimal.
Finally they run experiments comparing with prior art and show better running time AND quality. Overall a solid piece of work and worthy of NeurIPS. The randomized local search idea is good, and the backing with empirical evals is also welcome. Strengths: Writing is good. Comparison with prior art is also nicely done. The algo is clean and elegant, and the idea of randomized local search to improve running time is novel (at least to me; authors should mention if similar ideas appear in other uses of local search). Good of you to run experiments and validate the niceness of the algo! Weaknesses: The problem is important, though it would be nice for the authors to present why "subset" selection is significant as opposed to top-k-SVD. It is a bit unfair to expect a full motivation because this is a standard problem, but a bit more explanation of the importance of the problem would be nice. Technical Quality: 3 Clarity: 3 Questions for Authors: Are there practical use-cases of CSS (where the interpretability is crucial, as opposed to SVD)? In typical use-cases, do we sample rows (from n, like a coreset) or columns (from d, like dimension reduction)? The obj. function f doesn't appear to be defined where it is first used. Can your algo give better guarantees when allowed to pick more than k columns? How does this compare to prior SoTA in this regime? Where is the randomness captured in Thm 3.9? Is it in expectation? WHP? The proof has some details but the formal statement should also have it. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
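The randomized local search summarized in this review (sample a pool of candidate columns by residual importance, pick one uniformly, then test only k swaps instead of O(dk)) can be sketched in NumPy. This is a simplified illustration with assumed details (pool-size rule, greedy acceptance); `local_search_step` and `css_err` are names introduced here, not from the paper.

```python
import numpy as np

def css_err(A, idx):
    """Residual ||A - S S^+ A||_F^2 for the selected columns."""
    S = A[:, idx]
    return np.linalg.norm(A - S @ np.linalg.pinv(S) @ A, "fro") ** 2

def local_search_step(A, idx, rng):
    """One randomized swap attempt: sample a pool of up to 10k columns
    with probability proportional to their squared residual norms,
    pick one uniformly from the pool, and test swapping it with each
    of the k selected columns (k swap tests instead of O(dk))."""
    d, k = A.shape[1], len(idx)
    S = A[:, idx]
    R = A - S @ np.linalg.pinv(S) @ A      # residual of every column
    w = (R ** 2).sum(axis=0)
    w = w / w.sum()                        # residual-importance weights
    pool_size = min(10 * k, int((w > 1e-12).sum()))
    pool = rng.choice(d, size=pool_size, replace=False, p=w)
    v = int(rng.choice(pool))              # uniform pick among candidates
    best, best_err = list(idx), css_err(A, idx)
    for j in range(k):                     # only k swap tests
        trial = list(idx)
        trial[j] = v
        err = css_err(A, trial)
        if err < best_err:
            best, best_err = trial, err
    return best

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 15))
idx = list(range(5))
e0 = css_err(A, idx)
for _ in range(5):
    idx = local_search_step(A, idx, rng)
assert css_err(A, idx) <= e0 + 1e-9  # swaps are only accepted if they improve
```

Because a swap is accepted only when it lowers the residual, the objective is non-increasing over iterations; the paper's analysis then bounds how far such a local optimum can be from the best k-column solution.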
Rebuttal 1: Rebuttal: We thank the reviewer for the positive rating and the thoughtful comments. In the following we address the concerns. **Question 1. Are there practical use-cases of CSS (where the interpretability is crucial, as opposed to SVD)? In typical use-cases, do we sample rows (from n, like a coreset) or columns (from d, like dimension reduction)?** Response: We thank the reviewer for bringing up this issue. Although both SVD and CSS can be used for dimensionality reduction, they adopt quite different strategies. Specifically, SVD achieves dimensionality reduction by decomposing the matrix into $U\Sigma V^\top$, aiming to capture the low-rank structure of the input matrix. The SVD method generates new features as linear combinations of the original features, resulting in lower interpretability. In contrast, CSS focuses more on the interpretability of the solution, and selects columns based on their importance in performing specific tasks. Since it retains the original columns, CSS often offers better interpretability. A practical application of CSS is as follows. In remote sensing image classification, as given in [1], CSS selects a subset of the original features, which have clear physical meanings in the context of remote sensing images. As pointed out in [1], the aim of column subset selection is to find the most informative and distinctive features, which provide better interpretability for specific applications. Saurabh et al. [2] used a sampling method for column selection. A greedy method was applied in [1] for remote sensing image classification, and Jason et al. [3] also applied a greedy method. Sampling methods were used in [4, 5] to deal with both row and column selection. Our proposed LSCSS is a local search method used for dimension reduction, and has advantages over the state-of-the-art methods in accuracy and time, which gives it potential for many applications. [1] Benqin Song, et al.
New feature selection methods using sparse representation for one-class classification of remote sensing images. IEEE Geoscience and Remote Sensing Letters 18(10):1761-1765, 2020. [2] Paul Saurabh, et al. Column selection via adaptive sampling. Advances in neural information processing systems 28, pages 406-414, 2015. [3] Altschuler Jason, et al. Greedy column subset selection: New bounds and distributed algorithms. Proceedings of the 33rd International Conference on Machine Learning, pages 2539-2548, 2016. [4] Deshpande Amit, and Luis Rademacher. Efficient volume sampling for row/column subset selection. Proceedings 51th Annual Symposium on Foundations of Computer Science, pages 329-338, 2010. [5] Frieze Alan, et al. Fast Monte-Carlo algorithms for finding low-rank approximations. Journal of the ACM 51(6):1025-1041, 2004. **Question 2. The obj. function f doesn't appear to be defined where it is first used. Can your algorithm give better guarantees when allowed to pick more than k columns? How does this compare to prior SoTA in this regime?** Response: We thank the reviewer for raising the question. The objective function $f$ is defined in line 131 (Section 2). In the revised version, we will make the definition of $f$ clearer. For the CSS problem, selecting more columns can potentially improve the approximation guarantees. Theoretically, we can prove that selecting more than $k$ columns can lead to a decrease in the objective function value, as follows. Let $S_k$ be the solution with $k$ columns, and let $S_{k+1} = S_k \cup v$ be the solution with one additional column $v$. According to line 422 of this paper, $f(S,A) = \text{tr}(A^\top A) - \Vert S S^\dagger A \Vert_F^2$. Therefore, $f(S_{k+1},A) - f(S_k, A) = \Vert S_k S_k^\dagger A \Vert_F^2 - \Vert S_{k+1} S_{k+1}^\dagger A \Vert_F^2$. From Lemma 1 in [1], we have $\Vert S_k S_k^\dagger A \Vert_F^2 - \Vert S_{k+1} S_{k+1}^\dagger A \Vert_F^2 \le 0$. 
This implies that the function $f(A,S)$ is monotonically non-increasing. Thus, if the local search outputs a solution $S_k$ with $k$ columns, then by adding a column to form $S_{k+1}$, we can at least guarantee that $f(S_{k+1},A) \le f(S_k, A)$. To demonstrate that the objective function value decreases as more columns are selected, we conduct experiments using our proposed LSCSS with varying $k$ values on 8 datasets (listed in the paper). We use the objective function $f(S,A) = \Vert A - SS^\dagger A \Vert_F^2$ to measure this property. The detailed results, shown in Figure 1 of the attached PDF, suggest that the objective function value indeed decreases as the number of selected columns increases. Note that comparing two algorithms on the same dataset with different numbers of selected columns would be unfair; when the same number of columns is selected, our algorithm can be compared with the SoTA. [1] Altschuler Jason, et al. Greedy column subset selection: New bounds and distributed algorithms. Proceedings of the 33rd International Conference on Machine Learning, pages 2539-2548, 2016. **Question 3. Where is the randomness captured in Thm 3.9? Is it in expectation? WHP? The proof has some details but the formal statement should also have it.** Response: We apologize for not clearly describing the role of randomness in our submission. In Theorem 3.9, the solution $S$ returned by Algorithm 1 satisfies $\Vert A - SS^\dagger A\Vert_F^2 \le 110(k+1)\Vert A - A_k\Vert_F^2$ in expectation. In the revised version of the paper, we will make it clearer.
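The monotonicity argument in the rebuttal above ($f(S_{k+1}, A) \le f(S_k, A)$) can be checked numerically with a short sketch, assuming the objective $f(S, A) = \Vert A - SS^\dagger A \Vert_F^2$ as defined in the rebuttal; the code and names here are illustrative, not the authors' implementation.

```python
import numpy as np

def f(A, idx):
    """f(S, A) = ||A - S S^+ A||_F^2 for S = A[:, idx]."""
    S = A[:, idx]
    return np.linalg.norm(A - S @ np.linalg.pinv(S) @ A, "fro") ** 2

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 12))
# nested column sets S_1 ⊂ S_2 ⊂ ... ⊂ S_12
errs = [f(A, list(range(k))) for k in range(1, 13)]
# f(S_{k+1}, A) <= f(S_k, A): adding a column never increases the objective
assert all(a >= b - 1e-6 for a, b in zip(errs, errs[1:]))
```

The projection onto a larger column span can only capture more of A, which is exactly Lemma 1 of Altschuler et al. invoked in the rebuttal.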
Rebuttal 1: Rebuttal: We thank all the reviewers for the positive ratings and thoughtful comments. Pdf: /pdf/7f6a983bc81e6da7d1f27022187220b45803f926.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper addresses the classical CSS problem, and describes significant improvements over the current state of the art. The main idea is to run the randomized adaptive sampling algorithm for selecting k columns, and follow it by several iterations of swapping (local search), which further increase the accuracy. The swapping replaces one of the selected columns with another unselected column when such swapping reduces the error. Strengths: The main novelty is in the design of the local search. The algorithm first selects 10k top-norm candidates, and then selects uniformly at random among those. Even though the analysis does not show their algorithm to be the most accurate, the experimental results clearly demonstrate the superiority of this approach. Weaknesses: * There is no experimental comparison with the QRP (QR with Pivoting) method, which is known to show very good results in practice. It is significantly faster than your algorithm, as it takes: 4knd - 2 k^2(d+n) + 4 k^3/3 * The running time of the proposed algorithm is O(nd k^3 log k), which I do not consider fast. There are other algorithms that run much faster (e.g., Frieze, Kannan, and Vempala, JACM 2004, and the above-mentioned QRP). I am also not that impressed by the relative bound of 110(k+1), as there are other algorithms with 100 times better bounds (specifically k+1). What is most surprising here are your experimental results. Some notes: line 41. CSS is known to be NP-hard. See "Column subset selection is NP-complete" by Shitov, 2021. line 42. The comment is incorrect with respect to reference 2. Technical Quality: 3 Clarity: 3 Questions for Authors: * Why is there no comparison with the QRP? * How good is your accuracy compared to the optimal of worst case of (k+1)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive rating and the thoughtful comments. In the following we address the concerns. **Question 1. Why is there no comparison with the QRP (QR with Pivoting) method?** Response: The main reason that QRP is not compared with our CSS method is that they select columns for different purposes. In the QR decomposition process, QRP chooses the most influential column at each step to maximize numerical stability (i.e., it selects the column with the largest norm as the pivot column and swaps it with the current one), while CSS selects columns to best represent the input matrix. Many CSS algorithms identify the most representative columns (for dimension reduction) by using objective functions or volume sampling to measure column importance. Even though both QRP and CSS choose a subset of columns, they use them for different objectives. Thus, we have focused our comparisons on methods that solve CSS. Nevertheless, since the subset of columns obtained by QRP can also be viewed as a solution for dimension reduction, for completeness we compare our proposed algorithm with the subset of columns obtained by QRP in the following. We conducted experiments on 5 datasets: CMHS, ELD, Gas, FAds, and TGas (listed in Table 2 of our submission). For the QRP method, we first call the algorithm in [1] to obtain the permutation matrix $P$ that satisfies $AP=QR$. Then, we select the top $k$ elements of the diagonal of $P$ as the indices $C$ of the $k$ columns and compute the error ratio of the solution formed by $C$ as mentioned in line 290. The detailed results in Table 1 show that our method achieves the lowest error ratios across all datasets and is faster than the QRP method.
| Dataset | k | QRP Ratio | QRP Time | LSCSS Ratio | LSCSS Time |
| ------- | --- | --------- | -------- | ------------------- | ---------- |
| CMHS | 10 | 3.3580 | 1289.87 | 1.1478 $\pm$ 0.1094 | 5.17 |
| CMHS | 20 | 3.5838 | 1289.90 | 1.1815 $\pm$ 0.0586 | 30.45 |
| CMHS | 30 | 4.0852 | 1289.91 | 1.1988 $\pm$ 0.0772 | 52.82 |
| CMHS | 50 | 4.0925 | 1289.93 | 1.2535 $\pm$ 0.0560 | 62.36 |
| CMHS | 100 | 4.6109 | 1289.98 | 1.3334 $\pm$ 0.0194 | 108.98 |
| ELD | 10 | 2.5603 | 124.93 | 1.2929 $\pm$ 0.0652 | 3.47 |
| ELD | 20 | 2.8871 | 124.94 | 1.3656 $\pm$ 0.0225 | 13.93 |
| ELD | 30 | 3.5369 | 124.95 | 1.4834 $\pm$ 0.0761 | 35.03 |
| ELD | 50 | 4.3966 | 124.97 | 1.4946 $\pm$ 0.1086 | 66.27 |
| ELD | 100 | 4.8920 | 125.03 | 1.7470 $\pm$ 0.0692 | 89.94 |
| Gas | 10 | 2.1567 | 913.60 | 1.6315 $\pm$ 0.0747 | 11.67 |
| Gas | 20 | 2.4450 | 913.62 | 1.7466 $\pm$ 0.0542 | 47.90 |
| Gas | 30 | 2.8961 | 913.62 | 1.6915 $\pm$ 0.0372 | 322.92 |
| Gas | 50 | 2.8465 | 913.63 | 1.8173 $\pm$ 0.0287 | 386.15 |
| Gas | 100 | 3.6717 | 913.65 | 1.9362 $\pm$ 0.0344 | 850.01 |
| FAds | 10 | 1.1805 | 6487.52 | 1.0748 $\pm$ 0.0208 | 10.58 |
| FAds | 20 | 1.2224 | 6487.54 | 1.0815 $\pm$ 0.0126 | 18.27 |
| FAds | 30 | 1.2531 | 6487.65 | 1.0898 $\pm$ 0.0097 | 22.76 |
| FAds | 50 | 1.2886 | 6487.62 | 1.0853 $\pm$ 0.0087 | 36.04 |
| FAds | 100 | 1.3430 | 6487.74 | 1.1334 $\pm$ 0.0085 | 95.67 |
| TGas | 10 | 4.0830 | 1239.06 | 1.3696 $\pm$ 0.0731 | 30.28 |
| TGas | 20 | 7.1182 | 1239.07 | 1.7932 $\pm$ 0.0764 | 49.90 |
| TGas | 30 | 10.4892 | 1239.10 | 1.8677 $\pm$ 0.1050 | 77.93 |
| TGas | 50 | 18.9009 | 1239.16 | 2.0565 $\pm$ 0.1475 | 114.23 |
| TGas | 100 | 23.9558 | 1239.42 | 2.1499 $\pm$ 0.0883 | 243.89 |

*Table 1: Comparison results of the QRP and LSCSS algorithms*

[1] Quintana-Ortí, Gregorio, Xiaobai Sun, and Christian H. Bischof. A BLAS-3 version of the QR factorization with column pivoting. SIAM Journal on Scientific Computing 19.5 (1998): 1486-1494.

**Question 2. How good is your accuracy compared to the optimal of worst case of (k+1)?** Response: We thank the reviewer for asking this question. Chierichetti et al. [1] proposed an algorithm to obtain the (k+1) ratio by enumerating all $k$-column subsets in the worst case, with exponential time complexity ($n^k$ subsets), which makes it hard to handle large-scale datasets. Theoretically, our proposed LSCSS achieves a $110(k+1)$-approximation. To compare the accuracy of our proposed LSCSS with the enumeration algorithm in [1], we follow the setting in [2] and construct a synthetic dataset consisting of a $1000 \times 20$ random matrix $A$, where $A_{i,j}$ are i.i.d. from a normal distribution. We provide additional results comparing our LSCSS algorithm with the enumeration algorithm on this synthetic dataset. The detailed results show that the error of our method is only $0.05\%$ higher than that of the enumeration algorithm.

| k | Enumeration | LSCSS |
| --- | ----------- | ------------------- |
| 5 | 1.0421 | 1.0421 $\pm$ 0.0002 |
| 10 | 1.0892 | 1.0898 $\pm$ 0.0001 |
| 15 | 1.1379 | 1.1390 $\pm$ 0.0007 |

*Table 2: Error ratio of the Enumeration algorithm and LSCSS on a synthetic dataset*

[1] Chierichetti Flavio, et al. Algorithms for $\ell_p$ low-rank approximation. Proceedings of the 34th International Conference on Machine Learning, pages 806-814, 2017. [2] Paul Saurabh, et al. Column selection via adaptive sampling. Advances in Neural Information Processing Systems 28, pages 406-414, 2015.
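The QRP baseline compared in this rebuttal can be approximated with a NumPy-only sketch: greedy column pivoting (repeatedly pick the largest-residual column and deflate it) plus the error ratio against the best rank-k SVD error. The pivoting loop below is a simplified stand-in for a BLAS-3 QRP routine, and the function names are ours, not the authors'.

```python
import numpy as np

def qrp_select(A, k):
    """Greedy column pivoting: repeatedly take the column with the
    largest residual norm, then project it out of all columns
    (the pivot order computed by QR with column pivoting)."""
    R = A.astype(float).copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax((R ** 2).sum(axis=0)))
        idx.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(q, q @ R)  # deflate: remove the q-component
    return idx

def error_ratio(A, idx, k):
    """||A - S S^+ A||_F^2 / ||A - A_k||_F^2, with A_k the best rank-k
    approximation from the SVD (ratio >= 1 by optimality of the SVD)."""
    S = A[:, idx]
    num = np.linalg.norm(A - S @ np.linalg.pinv(S) @ A, "fro") ** 2
    sv = np.linalg.svd(A, compute_uv=False)
    return num / (sv[k:] ** 2).sum()

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
idx = qrp_select(A, 3)
ratio = error_ratio(A, idx, 3)
assert len(set(idx)) == 3  # pivots are distinct
assert ratio >= 1.0        # any column subset is at best as good as the SVD
```

The error ratio here is the same quantity reported in Tables 1 and 2: the subset's residual normalized by the best rank-k error, so 1.0 is the unattainable lower bound for any column-selection method.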
null
null
null
null
null
null
Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers
Accept (poster)
Summary: This manuscript proposes a layer-wise post-training quantization (PTQ) algorithm called aespa that aims at quantizing large transformer models, with refined quantization objectives for the attention layer that can accelerate the quantization process by pre-computation. The authors gave clear descriptions of the formulas used as quantization objectives and presented analysis showing that aespa has a lower computational cost. The authors also showed stable experimental results comparing aespa with other quantization methods. Overall, I would suggest an Accept. Strengths: 1. This PTQ algorithm is derived with mathematical rigor, so it can easily be applied to other transformer models. 2. The proposed algorithm considers the cross-layer dependency by targeting attention modules. In detail, it quantizes query, key, and value in an attention module separately: when quantizing one projection, the other 2 projections are fixed at full precision, decreasing the heavy computation of block-wise quantization. 3. Compared with conventional block-wise quantization and layer-wise PTQ schemes, aespa showed good perplexity results. Weaknesses: 1. This quantization algorithm only focuses on the attention module, targeting the cross-layer dependency within an attention module, not cross-layer dependency within the full model. 2. In the Experiments and Appendices, most tables compare the performance (perplexity), time, and memory cost. There is only one table (Table 10) showing the accuracy of zero-shot performance. There is no explicit result showing how aespa is more accurate than other methods. How much is the decrease in accuracy compared with the original floating-point models? Technical Quality: 3 Clarity: 4 Questions for Authors: 1. When quantizing a transformer model with the proposed algorithm, will all weights and activations be quantized to the same precision, or will only the weights be quantized? 2.
Will it be possible to discuss more on the accuracy of aespa compared to other methods? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors discussed the limitation of aespa: the algorithm only focuses on the attention module as a block. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's positive evaluation and invaluable comments on our work. Our point-to-point responses are as follows. **1. Consideration of the cross-layer dependencies within the full model (Weakness 1)** - We appreciate the reviewer's invaluable comments. By considering the reconstruction error for the entire Transformer block or the whole network, rather than the attention module, the dependencies between more layers could be considered, which would result in further enhancement. - However, due to nonlinear activation functions (e.g., SiLU for LLaMA), normalization layers, and weights of higher dimensions, considering the reconstruction error for the larger block (e.g., Transformer block) would result in complex objective functions, significantly more complicated than those developed for the attention module (see Eqs. (13) and (25)): $$\Delta \text{SA} _{Q} = \frac{1}{d} \mathbb{E} \left [ || \mathbf{V} ^{T} \mathbf{J} _{\text{softmax}} \mathbf{K} \Delta \mathbf{W} _{Q} \mathbf{X} || _{F}^{2} \right ], \Delta \text{SA} _{K} = \frac{1}{d} \mathbb{E} \left [ || \mathbf{Q} \Delta \mathbf{W} _{K} \mathbf{X} \mathbf{J} _{\text{softmax}}^{T} \mathbf{V} || _{F}^{2} \right ].$$ In fact, although we established exact reconstruction errors for the attention module as above, we employed relaxed versions in Eqs. (15) and (16) for efficient quantization (i.e., reasonable processing time and quantization with a single GPU). Therefore, when trying to consider cross-layer dependencies between more layers, simplifying the reconstruction error as in our approach would not be a good option, and in this case, just exploiting the original reconstruction error would be better. - It is worth noting that recent approaches have tried to consider all layers using the reconstruction error for the entire Transformer block (e.g., OmniQuant [1] and AffineQuant [2]). 
However, to alleviate the computational overhead, they rely on a naive rounding-to-nearest weight-rounding method and gradient approximation. As a result, despite considering dependencies between more layers, they suffer from an unstable low-bit quantization process and exhibit inferior performance compared to the proposed method targeting the attention module (see Table 1). - In the final version, we will discuss why we concentrate on the attention module, not the entire Transformer block. We hope for the reviewer's kind evaluation and acknowledgment of our effort to balance accuracy and efficiency for the real-world deployment of LLMs. [1] W. Shao et al., "OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models," ICLR 2024. [2] Y. Ma et al., "AffineQuant: Affine Transformation Quantization for Large Language Models," ICLR 2024. **2. More accuracy results: There is only one table (Table 10) showing the accuracy of zero-shot performance. There is no explicit result showing how aespa is more accurate than other methods. How much is the decrease in the accuracy compared with the original floating point models? (Weakness 2 and Question 2)** - We appreciate the reviewer's constructive suggestion. As suggested, we have newly evaluated the zero-shot performance on LLaMA2 models (see Table I in the PDF attached to the global response) and the zero-shot performances of the proposed method for higher bit-widths (see Table II in the attached PDF). - From Table I, we observe that the proposed method also outperforms the conventional approaches on LLaMA2 models with respect to zero-shot performance. - From Table II, we observe that the proposed method almost preserves the performance of the original full-precision model for the INT6 quantization. Even for the INT4 quantization, the performance difference between the full-precision and quantized models is very marginal (e.g., less than 1% degradation for 13B and 30B models).
- In the final version, we will incorporate these results to enrich the zero-shot performance comparison. **3. Quantization setting: When quantizing a transformer model with the proposed algorithm, will all weights and activations stay the same quantization precision? Or only the weight will be quantized? (Question 1)** - We appreciate the reviewer's comment. As mentioned in Section 4.1, we quantized only weights and retained activations in full precision as in the existing works [3]-[6], because activations do not pose a significant bottleneck and the inference of LLMs can be sufficiently accelerated by reducing memory movement via weight quantization [6], [7]. - Nevertheless, when the activations need to be quantized, we can combine existing orthogonal techniques for suppressing activation outliers [8], [9] with the proposed method, which will be considered in our future studies. [3] J. Chee et al., "QuIP: 2-Bit Quantization of Large Language Models With Guarantees," NeurIPS 2023. [4] J. Lin et al., "AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration," MLSys 2024. [5] Y. Jeon et al., "A Frustratingly Easy Post-Training Quantization Scheme for LLMs," EMNLP 2023. [6] E. Frantar et al., "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers," ICLR 2023. [7] S. Kim et al., "SqueezeLLM: Dense-and-Sparse Quantization," ICML 2024. [8] G. Xiao et al., "SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models," ICML 2023. [9] S. Ashkboos et al., "QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs," arXiv 2404.00456. Again, we express our sincere thanks to the reviewer for the invaluable comments and constructive suggestions, which will be very useful in improving the quality of our work. --- Rebuttal Comment 1.1: Title: Response to Authors' Rebuttal Comment: Thanks for the authors' further work. I would maintain my scores.
--- Reply to Comment 1.1.1: Comment: Dear Reviewer QBHg, Thank you for the time and effort you have dedicated to reviewing our paper. We believe that, owing to your invaluable comments and constructive suggestions, the proposed method could serve as a useful quantization solution that balances accuracy and efficiency. Best regards, Authors of Paper 3318
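The weight-only setting described in the rebuttal above (quantize weights, keep activations in full precision) can be illustrated with a generic sketch of per-channel round-to-nearest (RTN) quantization and a layer-wise reconstruction error of the form $\Vert W X - \widehat{W} X \Vert_F^2$. This is not aespa itself; the quantizer, shapes, and names here are illustrative assumptions.

```python
import numpy as np

def rtn_quantize(W, bits=4):
    """Per-output-channel round-to-nearest uniform quantization of a
    weight matrix (activations are left in full precision)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(W).max(axis=1, keepdims=True) / qmax  # one scale per row
    return np.clip(np.round(W / scale), -qmax - 1, qmax) * scale

def layer_recon_error(W, Wq, X):
    """Layer-wise objective ||W X - Wq X||_F^2 on calibration input X."""
    return np.linalg.norm((W - Wq) @ X, "fro") ** 2

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))   # a toy weight matrix
X = rng.standard_normal((64, 128))  # toy calibration activations
err4 = layer_recon_error(W, rtn_quantize(W, bits=4), X)
err8 = layer_recon_error(W, rtn_quantize(W, bits=8), X)
assert err8 < err4  # more bits -> smaller reconstruction error
```

Methods like aespa go beyond plain RTN by choosing the rounding to minimize such activation-aware reconstruction objectives (with attention-wise terms for Q, K, and V), rather than rounding each weight independently.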
Summary: The paper introduces 'aespa,' a novel post-training quantization (PTQ) algorithm designed to balance accuracy and efficiency in quantizing hyper-scale Transformer models. This method quantizes layer-wise for efficiency and employs attention-wise reconstruction to account for cross-layer dependencies. Extensive experiments and complexity analysis demonstrate that aespa outperforms existing PTQ methods, especially in low-bit quantization scenarios. Strengths: 1. Aespa, which combines layer-wise quantization with attention-wise reconstruction, is a significant advancement in the field of model quantization. 2. This paper emphasizes a pre-computation-based loss computation strategy, demonstrating its significant speed advantage over traditional block-wise PTQ methods like BRECQ. 3. Extensive experiments on diverse language models and datasets provide a comprehensive evaluation of aespa's performance. 4. Enhancing the credibility of this method would benefit from conducting thorough ablation studies. Weaknesses: 1. Further experiments on LLMs are crucial, particularly with the LlaMa 2 & 3 family. 2. How were other layers updated within a transformer block, apart from the attention module? Were they done layer-wise or block-wise? 3. The implementation details require clarification. Specifically, in the attention module, Q, K, and V are updated separately. How many iterations are necessary for each layer? Does the order of updating Q, K, and V impact performance? Technical Quality: 3 Clarity: 3 Questions for Authors: please refer to Weakness. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: A more in-depth discussion on implementation details would be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's acknowledgment of our work and constructive suggestions. Our point-to-point responses are as follows. **1. Further experiments on LLMs such as LLaMA2 and LLaMA3 (Weakness 1)** - We appreciate the reviewer's constructive suggestion. Table I (see the PDF attached to the global response) summarizes the performances of the proposed method and conventional approaches on LLaMA2 models. As evident, the proposed method outperforms conventional approaches with respect to both perplexity and zero-shot accuracy performance. We will add this result to the main text in the final version. - It requires more effort to measure the quantization performance on LLaMA3 models because LLaMA3 models adopted a different type of attention operation (grouped query attention (GQA) instead of the standard multi-head attention) and most of the conventional algorithms used for comparison do not support LLaMA3 models yet. For LLaMA3 models, we will try to update the results in the final version. **2. How were other layers updated within a transformer block, apart from the attention module? Were they done layer-wise or block-wise? (Weakness 2)** - As mentioned in Footnote 5 and Table 2 (see Appendix A), other layers apart from the attention module are quantized with the layer-wise objective in Eq. (6). In the final version, we will incorporate this explanation into the main text. - It is worth noting that if we use the reconstruction error for the entire Transformer block (rather than the attention module), then other layers apart from the attention module could also be quantized by targeting the block-wise reconstruction, which would lead to further enhancement. However, due to nonlinear activation functions (e.g., SiLU for LLaMA), normalization layers, and weights of higher dimensions, such a method would result in complex objective functions, significantly more complicated than those developed for the attention module (see Eqs. 
(13) and (25)). In the final version, we will discuss this trade-off between accuracy and efficiency. **3. The implementation details require clarification. Specifically, in the attention module, Q, K, and V are updated separately. How many iterations are necessary for each layer? Does the order of updating Q, K, and V impact performance? (Weakness 3)** - We appreciate the reviewer's comment. In our experiments, we set the number of iterations to 2,000 for all layers, as reported in Section 4.1. - The performance of the proposed method is NOT affected by the quantization order of $\mathbf{Q}$, $\mathbf{K}$, and $\mathbf{V}$. Before quantizing each Transformer block, we retain all layers inside the Transformer block in full-precision and then compute the values required for an efficient weight-rounding optimization (i.e., $\mathbb{E} [\mathbf{X} \mathbf{X} ^{T}]$ for each layer, $\mathbb{E} [\mathbf{K} ^{T} \mathbf{K}]$ for the query projection, $\mathbb{E}[\mathbf{Q} ^{T} \mathbf{Q}]$ for the key projection, and $\mathbb{E}[\mathbf{X} \mathbf{A} ^{T} \mathbf{A} \mathbf{X} ^{T}]$ for the value projection; see Table 2 in Appendix A). Because the quantization of each layer depends only on these pre-computed values obtained from the full-precision model, each layer is NOT affected by the quantization of other layers. In the final version, we will clarify this issue in the experimental setup section (Section 4.1). Again, we express our sincere thanks to the reviewer for the invaluable comments and constructive suggestions, which will greatly assist in improving our work.
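The order-independence argument above can be sketched numerically. The tensors and shapes below are hypothetical stand-ins (the paper's actual dimensions and its Table 2 are not reproduced in this thread); the point is only that all four statistics are computed once from full-precision tensors, so quantizing one projection cannot perturb the statistic another projection uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for one attention module's calibration tensors:
# X (d_model x n_tokens) layer inputs, Q/K (n_tokens x d_head)
# full-precision query/key outputs, A (n_tokens x n_tokens) attention map.
d_model, d_head, n_tokens = 16, 4, 32
X = rng.standard_normal((d_model, n_tokens))
Q = rng.standard_normal((n_tokens, d_head))
K = rng.standard_normal((n_tokens, d_head))
A = rng.standard_normal((n_tokens, n_tokens))

# The pre-computed statistics listed in the rebuttal:
stats = {
    "E[XX^T]": X @ X.T,                   # needed for every layer
    "E[K^T K]": K.T @ K,                  # for the query projection
    "E[Q^T Q]": Q.T @ Q,                  # for the key projection
    "E[X A^T A X^T]": X @ A.T @ A @ X.T,  # for the value projection
}

# All four depend only on full-precision tensors, so quantizing one
# projection never changes another projection's statistic; each is a
# symmetric Gram-type matrix, hence cheap to cache.
for name, s in stats.items():
    assert np.allclose(s, s.T), name
```

Because every statistic is frozen before any weight is quantized, the layers can be processed in any order with identical results, which is the claim made above.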
Summary: As a cost-effective alternative, learning-free PTQ schemes have been proposed for LLMs. Still, their performance is somewhat limited because they cannot consider the inter-layer dependency within the attention module, a significant feature of Transformers. This paper proposes a PTQ algorithm that balances accuracy and efficiency. The key idea of the proposed algorithm, called aespa, is to perform quantization layer-wise for efficiency while targeting attention-wise reconstruction to consider the cross-layer dependency. Through extensive experiments on various language models and a complexity analysis, it demonstrates that aespa is accurate and efficient in quantizing Transformer models. Strengths: The scheme aims to reconstruct the attention output to consider cross-layer dependency while quantizing models layer-wise to pursue efficiency. It is different from previous layer-wise error minimization. From extensive experiments on language models, it demonstrates that the approach outperforms conventional schemes by a significant margin, particularly for low-bit precision (INT2). To accelerate the quantization process, it proposes refined quantization objectives for the attention module. Through a complexity analysis, it demonstrates that about 10 times faster quantization than existing block-wise approaches can be achieved by exploiting the proposed objectives. Weaknesses: I am a bit confused about the results of the baselines. For OmniQuant, in the original paper, the perplexities on Wikitext-2 for 2-bit weight quantization on LLaMA-7B, 13B, and 30B are 15.47, 13.21, and 8.71, respectively. However, in this paper, they become 24.28, 22.85, and 13.04. The differences are too large. Though the authors mention that the difference is due to a different calibration dataset, the ablation study in OmniQuant demonstrates that different calibration datasets (Wikitext-2, C4, and Pile) do not lead to significant perplexity differences, with very small variance; see Table A10 of OmniQuant.
So the performance of OmniQuant using another calibration dataset, such as C4, should also be close to that reported in the original paper. However, the results in this paper are significantly different from the original paper. It is better to provide more discussion about this problem. The OmniQuant and AffineQuant methods demonstrate the performance of quantization in groups. Smaller groupsize leads to better performance. It is better to include an ablation study with different groupsizes. LLaMA3 has been published. It is better to have some results for recent models such as LLaMA2. Most of the results are shown in the appendix. It is better to organize the experimental part to include the main results in the main paper. Currently there are only block-wise perplexity comparisons in the main paper. It may be better to put the zero-shot evaluation in the main paper. Technical Quality: 2 Clarity: 3 Questions for Authors: See the weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's invaluable comments and constructive suggestions. Our point-to-point responses are as follows. **1. Results for OmniQuant (Weakness 1)** - We appreciate the reviewer's invaluable comment and careful reading. We mention that we used the official GitHub code without any modification when evaluating the performance of OmniQuant. - To check the performance variance of OmniQuant across different calibration datasets, we have changed a calibration dataset (i.e. C4 $\rightarrow$ Wikitext-2 as in the original paper) and measured the performance of OmniQuant. As Table IV (see the PDF attached to the global response) shows, the low variance of OmniQuant does NOT hold for INT2 weight quantization. In fact, the low variance has NOT been verified for INT2 weight quantization in the original paper and has been shown only for INT3 and INT4 (see [1, Table A10]). - Furthermore, we observe from Table IV that regardless of the type of the calibration dataset, the proposed method outperforms OmniQuant, which demonstrates the superiority of the proposed method. In the final version, we will incorporate this result (Table IV) and discuss the performance variance issue. [1] W. Shao et. al., "OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models," ICLR 2024. **2. Group-wise quantization (Weakness 2)** - We appreciate the reviewer's constructive suggestion. In this work, we did not consider group-wise quantization because it requires more memory costs and processing time for the inference than per-channel quantization [2, Tables 3 and 4]. - As suggested by the reviewer, we have newly evaluated the group-wise quantization performance of the proposed method (see Table III in the attached PDF). From the results, we observe that regardless of quantization algorithms, smaller groupsize leads to better performance, as mentioned by the reviewer. 
As evident, the proposed method outperforms conventional methods also for group-wise quantization. - In the final version, we will add this result to demonstrate the group-wise quantization performance of the proposed method. [2] H. Shen et. al., "Efficient LLM Inference on CPUs," arXiv 2311.00502. **3. Results for recent models such as LLaMA2 (Weakness 3)** - We appreciate the reviewer's constructive suggestion. Table I (in the attached PDF) summarizes the performances of the proposed method and conventional approaches on LLaMA2 models. As evident, the proposed method outperforms conventional approaches with respect to both perplexity and zero-shot accuracy performance. We will add this result to the main text in the final version. **4. Rearrangement of contents: most of the results are shown in the appendix. It is better to organize the experimental part better to include the main results in the main paper. Currently, there are only block-wise perplexity comparisons in the main paper. It may be better to put zero-shot evaluation in the main paper. (Weakness 4)** - We appreciate the reviewer's constructive suggestion. In the final version, we will enrich the experiment section by simplifying the technical derivations and including experimental results, such as zero-shot evaluation results, results on LLaMA2 models, and group-wise quantization results, in the main text. Again, we express our sincere thanks to the reviewer for the invaluable comments and constructive suggestions, which will greatly assist in enhancing the quality of our work. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. Most of my concerns are addressed and I have updated my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer E1kR, We are very glad to hear that your concerns have been addressed! Thank you for the time and effort you have dedicated to reviewing our paper. Best regards, Authors of Paper 3318
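To make the group-size discussion concrete, here is a minimal sketch of asymmetric uniform quantization with an optional group size. This is illustrative only: `quantize_rows` is a hypothetical helper, not the paper's rounding scheme. Smaller groups store more scales and offsets but track the local weight range more tightly, which is why smaller groupsizes tend to give better accuracy at the cost of extra inference memory.

```python
import numpy as np

def quantize_rows(w, bits=2, group_size=None):
    """Asymmetric uniform (min-max) quantization of a weight matrix.

    group_size=None -> per-channel (one scale/offset per output row);
    group_size=g    -> group-wise (one scale/offset per g input columns).
    Illustrative sketch only, not aespa's weight-rounding scheme.
    """
    w = np.asarray(w, dtype=np.float64)
    g = w.shape[1] if group_size is None else group_size
    qmax = 2 ** bits - 1
    out = np.empty_like(w)
    for start in range(0, w.shape[1], g):
        block = w[:, start:start + g]
        lo = block.min(axis=1, keepdims=True)
        hi = block.max(axis=1, keepdims=True)
        scale = np.maximum(hi - lo, 1e-12) / qmax  # guard zero-range rows
        q = np.clip(np.round((block - lo) / scale), 0, qmax)
        out[:, start:start + g] = q * scale + lo   # dequantized weights
    return out

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 64))
err_per_channel = np.abs(W - quantize_rows(W, bits=2)).mean()
err_grouped = np.abs(W - quantize_rows(W, bits=2, group_size=16)).mean()
# Smaller groups adapt the range locally, so the error typically drops.
```

At inference time the group-wise variant must store four times as many scale/offset pairs here (one per 16 columns instead of one per 64-column row), which is the memory/latency cost cited from [2].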
Summary: This paper proposes a new quantization strategy that balances accuracy and efficiency by reconstructing the attention output to account for cross-layer dependency. To accelerate the quantization process, the paper proposes refined quantization objectives for the attention module. This approach achieves better results at low-bit precision. Strengths: 1 This paper presents a large number of experiments, many of which are in the appendix. 2 This paper obtains higher accuracy at ultra-low bit-widths than any other work. Weaknesses: 1 The formula derivation is too redundant. 2 The paper does not make the trade-off between accuracy and efficiency clear in the introduction. Technical Quality: 3 Clarity: 3 Questions for Authors: 1 In Figure 2, what is a Layer, and what is a Block? 2 Following the method in the paper, will the training time be longer than ordinary PTQ? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I did not identify any issues related to limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's positive evaluation and invaluable comments on our work. Our point-to-point responses are as follows. **1. Redundant formula derivation (Weakness 1)** - We appreciate the reviewer's comment. In the final version, we will simplify the technical derivations and instead enrich the experiment section by including more results in the main text. **2. In Figure 2, what is "Layer" and what is "Block"? (Question 1)** - Figure 2 is a simplified illustration of our quantization strategy that performs quantization in a layer-wise manner while targeting block-wise reconstruction. In the context of Transformers, "block" means the attention module, and "layer" denotes each of the layers inside the attention module (i.e., the query, key, and value projections). In the final version, we will add this explanation to the caption of Figure 2 for a better understanding of our quantization strategy. **3. Following the method in the paper, will the training time be longer than ordinary PTQ? The paper does not make clear the trade-off between accuracy and efficiency in the introduction (Weakness 2 and Question 2)** - We appreciate the reviewer's comment. The processing times of the proposed method and conventional PTQ approaches are reported in Appendix K (see Table 11(a)). The proposed method completes quantization much faster than the conventional BRECQ, which we aim to simplify in this work; e.g., the proposed method completes the quantization of OPT-1.3B in 1.24 hours while BRECQ requires more than 10 hours. - While other methods proposed for the quantization of LLMs (OPTQ, Z-Fold, OmniQuant, and AffineQuant) are faster than the proposed method, they suffer from unstable or unsatisfactory low-bit performance (see Table 1 and Table 7). In the final version, we will elaborate on the trade-off between low-bit quantization performance and processing time in the introduction.
- Finally, we note that the proposed method would be an intriguing solution in real situations where the performance of original full-precision models needs to be preserved as much as possible (see Table 1 and Table 7). Even when faster quantization is needed, the proposed method can be efficiently used by skipping the weight-rounding optimization step (lines 5-8 in Algorithm 1) and just performing the quantization parameter computation step (line 4 in Algorithm 1). In doing so, we can greatly reduce the processing time and memory costs required for the quantization (see Table 12), yet the proposed method still exhibits better performance than conventional approaches (see Table 4). In the final version, we will incorporate this discussion into the main text. Again, we express our sincere thanks to the reviewer for the acknowledgment of our work.
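As a concrete illustration of the "parameter computation only" fast path mentioned above, the sketch below quantizes a hypothetical layer with plain round-to-nearest and then checks that a layer-wise reconstruction loss of the form $\|\mathbf{W}\mathbf{X} - \widehat{\mathbf{W}}\mathbf{X}\|_F^2$ (Eq. (6) itself is not reproduced in this thread; this is the standard form in this line of work) can be evaluated from the cached statistic $\mathbb{E}[\mathbf{X}\mathbf{X}^T]$ alone, without re-running the calibration inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 16))    # hypothetical layer weights
X = rng.standard_normal((16, 128))  # hypothetical calibration inputs

# Fast path: per-row min-max quantization parameters + round-to-nearest,
# i.e., the parameter-computation step without weight-rounding optimization.
bits = 3
qmax = 2 ** bits - 1
lo = W.min(axis=1, keepdims=True)
scale = (W.max(axis=1, keepdims=True) - lo) / qmax
W_hat = np.clip(np.round((W - lo) / scale), 0, qmax) * scale + lo

# The layer-wise loss can be computed from E[X X^T] via a trace identity:
# ||E X||_F^2 = tr(E (X X^T) E^T), where E = W - W_hat.
E = W - W_hat
direct = np.linalg.norm(E @ X, "fro") ** 2
via_stat = np.trace(E @ (X @ X.T) @ E.T)
assert np.isclose(direct, via_stat)
```

The identity is what makes repeated loss evaluations cheap during rounding optimization: `X @ X.T` is computed once per layer, after which every candidate rounding can be scored without touching the calibration set again.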
Rebuttal 1: Rebuttal: We are truly thankful for the invaluable comments provided by the reviewers. During the rebuttal, we have made our best effort to address all the comments raised by the reviewers. In this global response, we summarize our main contribution and emphasize the essentiality of our work. **<Main contribution>** - In this work, we have aimed to mitigate the computational overhead of the conventional block-wise weight-rounding optimization method (BRECQ [1]), thereby achieving a balance between accuracy and efficiency for the quantization of LLMs. We believe that this contribution is essential and not negligible because, without the proposed mitigation, the conventional BRECQ would not be suitable for the quantization of hyper-scale LLMs for the following reasons: - BRECQ requires too much processing time. Even for relatively small models such as OPT-1.3B, BRECQ requires more than 10 hours, about 8 times longer than our method (see Table 11(a) in Appendix K). Obviously, it would be difficult to conduct multiple rounds of hyper-parameter search (e.g., learning rate, the weight of the rounding loss ($\lambda$ in Eq. (27)), and the choice of calibration datasets), and without such a time-consuming hyper-parameter search, BRECQ shows inferior performance to ours (see Table 1). - Multiple GPUs are indispensable for BRECQ, while our method can quantize hyper-scale LLMs using a single A100 GPU. This is because BRECQ needs to save the forward and backward pass information of all layers inside the attention module to perform back-propagation during quantization. In fact, we encountered an out-of-memory (OOM) issue when trying to quantize LLMs with more than 7B parameters (see Table 11(b)). Due to this lack of scalability, BRECQ has not been considered a feasible solution for the quantization of LLMs in much of the literature [2]-[5].
In contrast, the proposed method conducts quantization in a layer-wise manner (yet still targeting the reconstruction of the attention output) and thus back-propagation information of only one layer needs to be stored at a time, which enables us to quantize LLMs with 30B parameters using a single GPU. - In this work, to strike an optimal balance between efficiency and accuracy, we propose a novel layer-wise quantization approach that considers cross-layer dependency, and we also present an efficient loss computation method through a novel mathematical analysis (see Sections 3.3 and 3.4 in the main text). In doing so, we could achieve robust quantization performance even at the low bit-width (see Table 1 and Table 7) while conducting the quantization using a single GPU with a much shorter processing time than that required by BRECQ (see Table 11(a)). [1] Y. Li et. al., "BRECQ: Pushing the limit of post-training quantization by block reconstruction,'' ICLR 2021. [2] E. Frantar et. al., "OPTQ: Accurate quantization for generative pre-trained Transformers,'' ICLR 2023. [3] J. Chee et. al., "QuIP: 2-bit quantization of large language models with guarantees,'' NeurIPS 2023. [4] Y. Jeon et. al., "A frustratingly easy post-training quantization scheme for LLMs,'' EMNLP 2023. [5] W. Shao et. al., "OmniQuant: Omnidirectionally calibrated quantization for large language models,'' ICLR 2024. Pdf: /pdf/39fb5a2814c747f80a3872ea1ee7175d06b89ccf.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper presents a post-training quantization approach for transformers. The proposed algorithm finds a better balance between accuracy and efficiency. The method is to perform quantization in a layer-wise way for efficiency and the optimization objective is to maintain the reconstruction of the quantized model to obtain good accuracy. Experimental results on 3 datasets over 2 transformers (OPT and LLaMA) are reported at the low bit widths (3 Bits and 2 Bits). The proposed approach outperforms other baselines at the low bit widths. Strengths: - This paper is well written and organized in general. - The mathematical formulation and derivation are clear and easy to follow. - Results on low bit widths outperform several baselines including BRECQ, OmniQuant, and AffineQuant. - Experimental results on two sota LLMs (OPT and LLaMA) are reported. Weaknesses: - The novelty in this paper is limited, as similar formulation and optimization methods have been widely used in prior papers. - Experiments are not solid. Some important results and comparisons are missing. - The paper lacks analysis and results to support the claims of the contributions. Please check the questions section for details. Technical Quality: 2 Clarity: 3 Questions for Authors: - The idea of reducing reconstruction error for quantization is not new - several prior works have already proposed this idea and adopted it in their works. The optimization method (iterative optimization - updating one variable and fixing others) is not new. - Actually, this paper only considers the reconstruction of the current quantization layer, which is sub-optimal. A better way is to optimize the reconstruction of the last layer (the reconstruction of the whole network). Because output errors could accumulate, small output errors at the first layers may change to large ones when going to the last layers. Only the reconstruction of the last layer's output can reflect the impact on the network. 
- This paper only reports results at low bit widths (4, 3, and 2 bits). Although it outperforms the three baselines (BRECQ, OmniQuant, AffineQuant), the accuracy loss is noticeable at low bit widths. How about the results at higher bit widths (6 bits or 8 bits)? Is it possible to maintain the accuracy at higher bit widths? People want to see quantization to a certain number of bits without losing accuracy. - One important contribution claimed by this paper is the capability to handle situations where frequent model updates and multiple hyperparameter tunings are required. However, we have not found practical results about the analysis and comparison of such situations. Although the authors give the time complexity of the algorithm and report the time to run it, I think this differs from the real situation in which weights are frequently updated. Moreover, some baselines (OPTQ, Z-FOLD) are also faster than the proposed method. - I suggest that the authors cite the latest related post-training quantization works for generative models or diffusion models [1][2][3][4][5], discuss the differences with them, and compare with them if possible. [1] Towards Accurate Post-training Quantization for Diffusion Models [2] TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models [3] Post-training Quantization on Diffusion Models [4] Q-Diffusion: Quantizing Diffusion Models [5] PTQD: Accurate Post-Training Quantization for Diffusion Models Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's invaluable comments. **1. Novelty (Question 1)** - Our contribution does NOT lie in the idea of reducing reconstruction error. Our primary contribution is to mitigate the computational overhead of the conventional block-wise reconstruction error minimization method (BRECQ) through a novel quantization strategy coupled with a time- and memory-efficient error computation method. Please take a look at our global response for this comment. **2. Reconstruction of the last layer's output (Question 2)** - While network-wise reconstruction can prevent error accumulation, inference of the whole network is needed for every input in each quantization step to measure the reconstruction error of the last layer's output. Obviously, this process is time-consuming and not feasible for LLMs with billions of parameters. Indeed, it has been reported that network-wise reconstruction takes 30 minutes even for ResNet18, which has 11M parameters [6], while our method quantizes the 125M model in 5 minutes (see Table 11(a) in Appendix K). - Moreover, multiple GPUs are indispensable for network-wise reconstruction because the forward and backward pass information of all layers in the whole network needs to be saved for backpropagation. Even for block-wise reconstruction (BRECQ), which considers a single Transformer block, we encountered an out-of-memory (OOM) issue when quantizing 13B models with a single GPU (see Table 11(b)). In contrast, our method conducts quantization in a layer-wise manner (yet still targeting the reconstruction of the attention output), and thus the backpropagation information of only one layer needs to be stored at a time, which facilitates the quantization of 30B models using a single GPU.
- We note that such overhead of the block- and network-wise reconstruction can be mitigated via recent approaches that learn only a small number of quantization parameters, instead of optimizing a weight-rounding policy (e.g., OmniQuant and AffineQuant). However, due to the gradient approximation involved in the quantization parameter learning, they often suffer from an unstable low-bit performance and perform worse than our method (see Table 1). - In the final version, we will discuss why we focus on the output of the attention module, not the output of the whole network. We hope for the reviewer's kind evaluation on our effort to balance accuracy and quantization efficiency. [6] C. Wang et. al., "Leveraging inter-layer dependency for post-training quantization," NeurIPS 2022. **3. Results for higher bit-widths (Question 3)** - In Table II (see the PDF attached to the global response), we summarize INT4 and INT6 quantization performances of the proposed method. We observe that our method almost preserves the performance of the original full-precision model for the INT6 quantization. Even for the INT4 quantization, the performance degradation is very marginal (e.g., less than 1% degradation for 13B and 30B models). **4. Why the proposed method is suitable when models are updated frequently (Question 4)** - In real situations, the deployment process of LLMs usually consists of three steps: training, quantization, and evaluation on target devices such as mobile. From the perspective of quantization, it is important to 1) preserve the performance of original full-precision models as much as possible and 2) complete the quantization in a reasonable time to frequently verify whether updated pre-trained models satisfy the target performance on target devices. 
- In this context, the conventional BRECQ would not be a practical solution because it requires too much processing time (e.g., almost 20 hours for OPT-2.7B; see Table 11(a)) and needs multiple searches of hyperparameters (e.g., learning rate, rounding loss weight ($\lambda$ in Eq. (27)), and calibration dataset choice) without which BRECQ performs worse than our method (see Table 1). - While other methods (OPTQ, Z-Fold, OmniQuant, and AffineQuant) are faster, they suffer from unstable or unsatisfactory low-bit performance (see Tables 1 and 7). We emphasize that our method would be an intriguing solution in real situations where the performance of original models needs to be preserved as much as possible (see Tables 1 and 7). Even when faster quantization is needed, our method can be efficiently used by skipping the weight-rounding optimization step (lines 5-8 in Algorithm 1) and just performing the quantization parameter computation step (line 4 in Algorithm 1). In doing so, we can greatly reduce the processing time and memory costs required for the quantization (see Table 12), yet the proposed method still exhibits better performance than conventional approaches (see Table 4). **5. Comparison with PTQ methods for diffusion (Question 5)** - Thank you for pointing out interesting references. In contrast to our approach that focuses on mitigating the computational overhead of the conventional BRECQ, all the recommended methods have tried to incorporate diffusion-specific quantization strategies into BRECQ to overcome output distribution discrepancies over different time steps (e.g., grouping of time-steps with similar distributions [1], temporal feature preservation [2], separate quantization for shortcuts in U-Net [4], and time step-aware mixed-precision [5]). In the final version, we will cite the recommended works and discuss the differences. 
- We recall that we have not assumed any specific network architecture and have only used the definition of the attention operation when developing our quantization objectives (see Section 3). Thus, we believe that the proposed method can be integrated with those diffusion-specific strategies without any modification, thereby accelerating the quantization of diffusion models. Because the quantization of diffusion models is not our area of expertise, we leave this integration as future work and mention it in the conclusion and limitation section. --- Rebuttal Comment 1.1: Comment: The authors addressed most of my questions regarding the novelty, the results, and the comparison with other works. I will raise my score to 5. --- Reply to Comment 1.1.1: Comment: Dear Reviewer vuzn, We are very glad to hear that your questions have been addressed! We deeply appreciate the time and effort you have dedicated to reviewing our paper. Best regards, Authors of Paper 3318
Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars
Accept (poster)
Summary: The paper proposes EASE, an exemplar selection algorithm for in-context learning of large language models (LLMs). It takes into account the order of the exemplars and offers the possibility to jointly optimize the instruction and the exemplars. A neural network is iteratively trained on the embeddings of the current exemplar sequence to predict the score of the next exemplar. The search space is reduced using a technique based on optimal transport. The paper also empirically shows that the effect of exemplar selection is not the same for all tasks. Strengths: - A wide range of baseline methods and tasks are covered in the experiments. - The paper provides insights on the impact of exemplar selection on some specific tasks. Weaknesses: - On Instruction Induction (II), the performance of EASE, which requires training a model for each task, is very close to the random search baseline for most tasks. - Experiments are mainly conducted on gpt-3.5-turbo-1106, an API that might include unknown pre- and post-processing. Moreover, the availability and behaviour of these APIs is not guaranteed, hindering reproducibility. Consequently, the algorithm should also be validated on open-weight models (not just when doing progressive finetuning) in addition to models only available through APIs. Minor issues: Concerning the presentation, although the main contribution is clearly stated, the paper is structured in a way that makes it a bit hard to read. For example, the main paper lacks a dedicated related work section that would better contextualize the proposed algorithm. Technical Quality: 2 Clarity: 2 Questions for Authors: - "out-of-distribution" tasks are defined as tasks on which the LLM is not already well trained. How are these tasks selected without access to the training data? - what is the effect of the number of exemplars k on performance (using a range instead of a fixed value as in D.5)?
Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: There is a section on limitations in the main paper and a section on broader social impact in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper. We are glad to know that our paper has provided insights into the impact of exemplar selection through a wide range of baseline comparisons. We would like to address the specific concerns and questions raised by the reviewer below: > [W1] Performance on II is very close to random search baseline You are right that the random baseline performs well in Tab. 1 of Instruction Induction (II) tasks. It is not unexpected that random is a strong baseline: In particular, we highlight in second para. of Sec. 4.1 that such tasks might not be suitable for evaluation of the exemplar selection's effectiveness. This is likely **due to data contamination** where the benchmark data have been extensively used to train the target model (i.e., GPT-3.5 in our paper). Therefore, the target model easily identifies the task through exemplars (including those selected by the random baseline) and utilizes its expert knowledge on the task to answer. This explains why the choice of in-context exemplars has minimal impact on the performance in contaminated tasks. To better evaluate the effectiveness of exemplar selection, we propose three classes of tasks in Sec. 4.3 and see the significant advantages of EASE in them. We attribute the performance gain to the fact that EASE balances exploration and exploitation to be more effective under budget constraints. The above observation aligns with our insights from progressive finetuning (Sec. 4.2): As the model knows more about the task, the effect of in-context exemplars diminishes. > [W2] Validate on open-weight models for reproducibility As suggested by the reviewer, we conduct additional experiments with open-weight models. Due to the limited time during rebuttal, we use a representative **(Meta's newest) model Llama-3.1-8B-Instruct** as the target model. We present the results in Tab. A5 of rebuttal PDF. 
Importantly, the results are **consistent with the original conclusions** we drew for the black-box models in the paper. The model is also much smaller than GPT-3.5, hence making it easier for others to deploy and reproduce the results. We hope this further improves the reproducibility of our results. That said, we still advise practitioners to use the black-box models for comparison, which can be more affordable especially in academic settings (pay-as-you-go APIs vs. hosting a model on servers). We note that the black-box model is also fair for comparison because all baseline methods essentially undergo the same pre- and post-processing. > [Q1] “Out-of-distribution” tasks selection We agree with the reviewer that we cannot confirm whether a task is already well-trained without access to the training data. We clarify that "out-of-distribution" tasks are **defined loosely here** (hence in **inverted commas** as mentioned in Lines 276-277 of our paper). If this is not a suitable name, we will gladly adopt another name that the reviewer deems fit; let us know! We select these tasks with the following practical intuitions: (1) If a task contains novel rules such that the model has to extract the underlying relationships among the provided in-context exemplars and directly use the relationship for test-time inferences, it is unlikely that these tasks appear in the training dataset (Rule-based tasks in Lines 281-303). (2) If a task is constructed through random label remapping to be against the model's existing knowledge (e.g., "good" sentiments are mapped to "bad" sentiments), it is unlikely such tasks appear in the training dataset (Remapped label tasks in Lines 304-314). (3) If excessive noises are added to existing real datasets, it is likely to present a distributional shift from the original real distribution to one that the model has not seen before (Noisy tasks in Lines 315-322). To provide more empirical evidence, we refer the reviewer to Tab. 
A6 & A7 of rebuttal PDF. We performed random exemplar in-context prompting and discovered that it achieved an average performance of 17.6\% for "out-of-distribution" tasks, which contrasts with the average performance of 64.7\% for II benchmark tasks. This demonstrates that the model is **likely to have less knowledge** about our defined "out-of-distribution" tasks. Additionally, we can also see that the performance gain of EASE is higher for "out-of-distribution" tasks. > [Q2] Using a range instead of a fixed value of $k$ as in D.5 If we understand correctly, you refer to the setting of allowing a range instead of a fixed value of $k$ to be selected. That is, if $k=50$, we allow the number of exemplars in the prompt to be any integer from 1 to 50. We present the additional results in Tab. A6 of rebuttal PDF. Firstly, EASE continues to consistently outperform the strongest baseline Best-of-N. Secondly, we also observe that the prompts with the **best performance typically have a large number of exemplars**, i.e., close to the max $k$ allowed. Thirdly, using $k=50$ gives better performance than $k=5$. Therefore, including more exemplars in the prompt usually gives a higher performance. However, this comes at the expense of a higher query cost at test time because more tokens are used in the prompt. We thank the reviewer for pointing us to this interesting insight which we will add to the revised paper. > Minor issues: Related works and presentation We thank the reviewer for pointing out ways to improve the presentation of the paper. With the additional page upon acceptance, we can certainly move the dedicated related work section (now in App. A) back to the main paper to ensure a better flow. We also welcome other ways to improve the structure of presentation in our paper. We hope our clarifications above have helped to improve your opinion of our work. --- Rebuttal Comment 1.1: Comment: Thank you for your answer to the questions. 
The method was validated using the open-weight model Llama-3.1-8B-Instruct, which improves the reproducibility of the paper. I disagree with the stated advantages of closed APIs. There are also cloud providers that offer inexpensive pay-as-you-go access to open-weight models. The authors also ran experiments on the effect of k. The new results helped in answering some of the questions and improved the quality of the paper. However, I have some remarks regarding the performance on the “Out-of-distribution” tasks, which is one of the main contributions of the paper (since the method is as good as the baselines on regular tasks even though it introduces overhead). The proposed approach works better on some specific tasks labeled as “Out-of-distribution”. The task selection mainly relies on practical intuitions. Is it possible that the baselines perform worse on these tasks because of some other factors unrelated to model knowledge? This could be tested if we had access to the training data (with models like OLMo, Pythia) or by using methods that more rigorously assess the model's knowledge. --- Reply to Comment 1.1.1: Comment: We are happy that our additional experiments helped to address the questions! We also thank the reviewer for suggesting the use of OLMo and Pythia to verify our intuitions about model knowledge. Upon checking OLMo's published data source, named Dolma, it is likely that Instruction Induction (II) data has been included in the training source (via Common Crawl or The Stack). Therefore, we perform additional experiments across different checkpoints (at 41k, 130k, and 410k training steps) of the recent OLMo-7B-0424-hf model, which released checkpoints over more than 400k steps of training. The results are presented below.
| | OLMo_41k | OLMo_41k | OLMo_130k | OLMo_130k | OLMo_410k | OLMo_410k |
|:----------------------|:--------------------:|:----------------:|:--------------------:|:----------------:|:--------------------:|:----------------:|
| | **Best-of-N** | **EASE** | **Best-of-N** | **EASE** | **Best-of-N** | **EASE** |
| object_counting | 20.0 ± 2.9 | 28.3 ± 1.7 | 25.0 ± 2.9 | 38.3 ± 1.7 | 45.0 ± 2.9 | 46.7 ± 1.7 |
| sentence_similarity | 25.0 ± 0.0 | 26.7 ± 1.7 | 30.0 ± 2.9 | 31.7 ± 1.7 | 41.7 ± 1.7 | 41.7 ± 1.7 |
| orthography_starts_with | 21.7 ± 1.7 | 23.3 ± 1.7 | 21.7 ± 1.7 | 28.3 ± 1.7 | 26.7 ± 1.7 | 31.7 ± 1.7 |
| translation_en-fr | 21.7 ± 1.7 | 23.3 ± 1.7 | 38.3 ± 1.7 | 45.0 ± 2.9 | 35.0 ± 0.0 | 40.0 ± 0.0 |

The conclusions are consistent with Figure 1 of the main paper.

- When the training has just started (i.e., at 41k steps), the model might not be capable enough to carry out effective in-context learning.
- As the training progresses (i.e., at 130k steps), we observe the best exemplar selection effectiveness. At this point, the model is capable of learning underlying relationships among the in-context exemplars, yet is not well-trained on the specific task.
- As the model converges (i.e., at 410k steps), the gain from exemplar selection using our EASE diminishes as the model becomes well-trained on the dataset of the respective tasks.

We also tried the rule-based tasks and remapped label tasks on OLMo-7B-0424-hf. However, the in-context learning performances are always at 0\% for these more difficult tasks, so comparisons are not meaningful. We also look forward to similar efforts (as OLMo and Pythia) in the community to open-source larger and more capable models with checkpoints in the future. We thank the reviewer again for the insightful comments that helped improve our paper.
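As context for the Best-of-N comparisons in this thread: the baseline amounts to random search over ordered exemplar subsets under a fixed evaluation budget. A minimal sketch, assuming a hypothetical `score_prompt` evaluator (e.g., average validation accuracy of the resulting prompt); this is an illustration, not the authors' implementation:

```python
import random

def best_of_n(candidates, k, n_trials, score_prompt, seed=0):
    """Best-of-N baseline: sample random ordered k-subsets of the
    exemplar pool and keep the highest-scoring one.

    `score_prompt` is an assumed callable mapping an ordered exemplar
    list to a validation score (e.g., obtained via LLM queries).
    """
    rng = random.Random(seed)
    best_seq, best_score = None, float("-inf")
    for _ in range(n_trials):
        seq = rng.sample(candidates, k)  # ordered subset, no repeats
        score = score_prompt(seq)
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq, best_score
```

Each trial costs one round of validation queries, which is why the rebuttals compare methods under a fixed query budget (e.g., $N$ = 500 in Tab. A1).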
Summary: The paper introduces EASE (Efficient ordering-aware Automated Selection of Exemplars), a new approach to boost in-context learning (ICL) in large language models (LLMs). EASE optimizes the selection and ordering of input-label exemplars without needing model fine-tuning or test-time retrieval. EASE trains a neural network to predict the prompt performance using hidden embeddings from a pre-trained language model. Using the network as the scorer, it then employs a bandit algorithm to search for the best order exemplars for improved performance efficiently. Additionally, EASE can jointly optimize exemplars and instructions. In Instruction Induction (II) benchmark tasks and four novel out-of-distribution tasks, EASE outperforms several basic baselines, especially when the LLM has limited knowledge of the task. Strengths: - The problem addressed is significant, as finding a fixed set of examples and instructions that generalize well to testing time could be a valuable technique. - The proposed method is well-founded, incorporating a novel component that uses optimal transport to reduce the search space. - The performance improvement in the out-of-distribution (OOD) setting is clear. Weaknesses: - The main framework is quite similar to prior work [1], with only minor differences in details. - The literature review is inadequate, failing to cover other prompt optimization approaches, including instructions and/or example ordering (see below). Including a comprehensive related work section is crucial to highlight the novelty of the approach and justify its preference over others. - Due to the lack of discussion on prominent related works, it is unclear why the authors did not include them in the baseline set. The current baseline set is weak, relying only on heuristic methods. It would be better to incorporate proper optimization-based methods using RL[2,3], GA [4], or even LLM [5,6] as optimizers. - The current presentation of the results is unconvincing. 
For instance, there is no comparison in terms of running time or efficiency, areas where the paper claims to excel. The second-best baseline, Best-of-N, shows competitive results in real benchmarks. If N increases, its performance might surpass EASE. Without efficiency comparison between methods, it is hard to determine if EASE is useful. - Another major concern is the method seems to only work well with synthetic OOD settings. More experiments with real datasets are preferred. [1] Xiaoqiang Lin, Zhaoxuan Wu, Zhongxiang Dai, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet, and Bryan Kian Hsiang Low. Use your INSTINCT: Instruction optimization using neural bandits coupled with transformers. In NeurIPS Workshop on Instruction Tuning and Instruction Following, 2023. [2] Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing, and Zhiting Hu. 2022. Rlprompt: Optimizing discrete text prompts with reinforcement learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3369–3391. [3] Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, and Joseph E Gonzalez. 2022. Tempera: Test-time prompt editing via reinforcement learning. In The Eleventh International Conference on Learning Representations [4] Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit Bansal. 2023. Grips: Gradient-free, editbased instruction search for prompting large language models. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 3827–3846. [5] Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2022. Large language models are human-level prompt engineers. In The Eleventh International Conference on Learning Representations. [6] Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. 2023. Large language models as optimizers. 
arXiv preprint arXiv:2309.03409. Technical Quality: 2 Clarity: 2 Questions for Authors: - What is the size of the dataset D? If D is small, a brute-force search could be feasible. Typically, LLMs are most beneficial when there is little to no data for downstream tasks, so a small D would be more realistic. - How was the NN’s uncertainty computed? - What method was used to tune the hyperparameters of NeuralUCB? - Please consider more real data such as those in the Tempera paper [3]. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors could also mention the cost of hyperparameter tuning Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper and acknowledging that we address a significant problem with a well-founded and novel method. We would like to address your specific questions below: >[W1] Similar to [1] In EASE, we face the challenge of an exploding permutational search space of exemplars that is distinct from [1]. A direct application of NeuralUCB would not work in practice for exemplar selection without our search space reduction technique through OT. It is also new to formulate the problem as one of optimization in the space of exemplar sequence embedding, which entirely removes the reliance on a white-box LLM as in [1]. The new formulation also enables the natural incorporation of instructions and hence the joint optimization of instructions and exemplars. Thus, the **distinct solutions of EASE come from the unique challenges in exemplar selection**. We also refer to the explanation of our distinction from [1] in Lines 159-165. > [W2] The literature review is inadequate To clarify, we have a more extensive related works section in App. A, which expands on the ones discussed in Introduction. We will add references to RLPrompt, TEMPERA & GrIPS in our revised paper. We will have an additional page to include related works in the main paper upon acceptance. > [W3] Other related works We thank the reviewer for the references. As long as the instruction optimization method can handle permutational search space of exemplars, they can serve as a baseline. While they are valuable references to add in our related works (we will do so for the revised paper), it is **non-trivial to adapt these works to our problem setting**, as discussed below. LLMs: APE [5] and OPRO [6] rely on LLM's generation for the instruction search space and hence are not designed for exemplar selection. It is non-trivial to adapt them for exemplar selection. 
Nevertheless, we **did a workaround** and adopted the same spirit of **asking the LLMs to directly select exemplars** for us by explicitly telling LLM the search space. In App. D.9, we demonstrated that EASE performs much better than asking LLMs to select exemplars. RL: The discrete optimization of a fixed-length prompt (i.e., no exemplars) in RLPrompt cannot be easily extended to our setting of exemplar optimization. TEMPERA focuses on query-dependent (vs. our query-agnostic) prompts which we argue to carry privacy risks of data exposure in Introduction paragraph 2. The formulation of TEMPERA also only supports classification tasks. So, we cannot adapt these methods to compare with EASE. GA: GrIPS only focuses on iteratively editing the instruction and does not optimize for exemplars. Specifically, it either uses a fixed set of exemplars or performs a simple heuristic random search (Sec. 5.2 & 5.8 of GrIPS). In our paper, we have in fact demonstrated the superior performance of EASE against random search in Best-of-N. > [W4] Efficiency To clarify, EASE excels in **query efficiency instead of time efficiency** (Lines 85-90). While EASE can take longer to perform the optimization, the saving in the LLM API calls or black-box LLM queries can be substantial. So, we perform a fair comparison with the baselines in all tables using a fixed target LLM call/query budget. To address the question of the performance for a large $N$, we perform additional experiments with $N$=500. The results are shown in Tab. A1 of rebuttal PDF. Compared to the original Tab. 2 of the paper, we see that the performance gain of EASE is still significant even though all baselines (Evo, Best-of-N, and EASE) improve with a larger budget $N$. > [W5][Q4] More real datasets We highlight that EASE also works better than baselines in real datasets: In Tab. 1, EASE achieves top performance for 17 tasks while Best-of-N tops only 11. 
To make the case more convincing, we conducted experiments on a number of additional real benchmarks. We present results on the benchmarks used in the TEMPERA paper in Tab. A4 of rebuttal PDF, including **MR, Yelp P., CR, MNLI & MRPC**. Note that these tasks are overly simple to distinguish the effectiveness of EASE because all of them achieve above 80\% accuracy and are hard to improve further using mere in-context examples. So, we refer the reviewer to **more complicated benchmarks on real tasks with reasoning chains** in Tab. A2 of rebuttal PDF, which we just experimented with during the rebuttal period. We hope that the good performance of EASE on **MATH, GSM8K & AQuA-RAT with CoT reasoning chains** makes the baseline comparisons in our paper more convincing. #### Questions > [Q1] Brute-force search We note that the search space grows exponentially with the size of $D$. We use $|D| = 100$ in most experiments. However, even when $|D| = 10$, a brute-force approach is still not feasible because selecting and ordering 5 exemplars from 10 candidates yields $10 \times 9 \times 8 \times 7 \times 6 = 30{,}240$ possibilities. It can incur a considerable monetary cost to query the black-box API and obtain the best exemplar sequence. > [Q2] Uncertainty for NN Due to length restrictions, we will answer this separately in an official comment below. > [Q3] NeuralUCB hypers In order to show the **robustness and generalizability to all possible tasks**, the same set of hyperparameters should perform well on all tasks. So, we fix the architecture of the NN, learning rate, weight decay, and training iterations and do not tune them. We also used the extent of exploration $\nu=0.01$ throughout all experiments after searching from four values $[0, 0.01, 0.1, 1]$. We hope our clarifications above have helped to improve your opinion of our work. --- Rebuttal Comment 1.1: Comment: Dear Reviewer NbA9, Thank you for taking the time to review our paper!
As the discussion period is concluding in less than a day, we hope to hear from you whether our rebuttal has sufficiently addressed your questions and concerns. Especially, we hope that the clarifications on our distinctions from [1] and related works are satisfactory. Additionally, we hope that the additional experiments we conducted on real datasets (MR, Yelp P., CR, MNLI & MRPC) and datasets with CoT reasoning chains (MATH, GSM8K & AQuA-RAT) have made our papers more convincing. We are more than happy to answer any further questions during the remaining discussion period. Best regards, Authors of Paper 16009
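A back-of-envelope check of the search-space argument in [Q1] of this thread, counting ordered, repetition-free exemplar sequences (illustrative only; function name is ours):

```python
from math import perm

def exemplar_search_space(n, k):
    """Number of ordered sequences of k distinct exemplars drawn from a
    pool of n candidates: n! / (n - k)!."""
    return perm(n, k)

# Even a 10-candidate pool yields tens of thousands of orderings,
# and the default pool size |D| = 100 pushes this into the billions:
print(exemplar_search_space(10, 5))   # 30240
print(exemplar_search_space(100, 5))  # 9034502400
```

Since each sequence needs at least one black-box query to score, exhaustive enumeration is infeasible on monetary grounds alone, as the rebuttal argues.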
Summary: The paper introduces EASE, a method for optimizing ICL in LLMs by selecting and ordering exemplars efficiently. Unlike retrieval-based methods that incur extra computation and privacy risks, EASE uses a neural bandit algorithm and optimal transport techniques to find high-quality ordered exemplars without test-time computation. It also extends to jointly optimize exemplars and instructions for improved performance. Empirical evaluations demonstrate EASE's superiority over existing methods across various tasks, especially where the LLM lacks task-specific knowledge. EASE proves robust in noisy data scenarios, offering a comprehensive solution for enhancing ICL in practical applications. Strengths: - The introduction of the EASE algorithm, which combines neural bandit optimization and optimal transport, is a unique approach to exemplar selection. - The paper includes extensive empirical evaluations, comparing EASE with a comprehensive suite of baseline methods across various tasks. The consistent outperformance of EASE underscores its effectiveness. - The paper provides clear explanations of the methodology, including the use of neural bandit algorithms and optimal transport, making the complex concepts accessible. Weaknesses: - The requirement for on-the-fly computation of embeddings for ordered exemplar sequences is identified as a potential computational bottleneck. - EASE relies on the availability of a suitable validation set, which may not always be readily available in certain scenarios. This requirement could limit the method's applicability in some contexts. - The joint optimization of exemplars and instructions, while beneficial, adds complexity to the method. The paper could provide more detailed guidance on effectively implementing this joint optimization in practice. 
Technical Quality: 3 Clarity: 3 Questions for Authors: - Can you provide more insights or potential solutions to mitigate the computational bottleneck caused by the on-the-fly computation of embeddings for ordered exemplar sequences? - While combining EASE with retrieval-based methods for large exemplar sets has shown better performance, could you elaborate on any potential limitations or challenges of this approach? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - The need for a suitable validation set is recognized as a limitation, as such sets may not always be readily available. This could restrict the method's applicability in certain contexts where validation data is scarce or difficult to obtain. - The selection of exemplars could inadvertently reinforce biases present in the training data. The authors could address how EASE mitigates bias and ensures fairness in exemplar selection, especially in sensitive applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper. We are glad to hear that our approach is unique, the empirical evaluations are extensive on comprehensive baselines, and our explanations are clear. For the specific concerns and questions raised, we will address them below: > [W1] On-the-fly computation of embeddings as a potential computational bottleneck > [Q1] Insights or potential solutions to mitigate the computational bottleneck In addition to the optimal transport (OT) introduced in EASE to reduce the on-the-fly computation of embeddings required, we can potentially eliminate the on-the-fly embedding entirely by **adopting the average embedding (AvgEmb) of individual exemplars**. However, there is no free lunch: Such computational simplification comes at the cost of performance due to the **loss of order information**. We performed ablation studies in App. D.6 to demonstrate that simply averaging the embeddings of all exemplars using AvgEmb results in worse performance. Though not as good, note that AvgEmb still achieves a decent and competitive performance. The practitioners can balance the trade-off and select the most suitable method. While the above approaches either use pre-trained models for on-the-fly sequence embedding or discard order information altogether, there is a potential alternative: One can try **concatenating the individual exemplar embedding** as the NN input such that the order information is captured in the concatenated embedding. However, this method may require more training data to fit the NN due to a more complex mapping (from the higher dimensional input space). Also, this method **may not scale** well when selecting a large number $k$ of exemplars. > [W2] EASE's reliance on the availability of a suitable validation set as a limitation > [L1] Especially in contexts where validation data is scarce or difficult to obtain We clarify using a validation set is a common practice as in [4, 43, 14]. 
The validation dataset can be as small as 20 data samples to provide meaningful evaluations of the prompts to guide the optimization process. We used 20 validation samples for all experiments. The manual creation of a small validation set should not be very difficult or expensive. Moreover, in situations where a suitable validation set is simply not possible to obtain, it is **possible to revert to human feedback/responses**, thanks to the recent work of Lin et al. (2024). We can replace the numerical validation score with binary human feedback (i.e., asking a human to select a preferred prompt according to the responses/outputs of two prompt candidates). This greatly relaxes the requirement of a labeled validation set and only requires preference feedback from an interactive human user. > [W3] More detailed guidance on effectively implementing this joint optimization in practice. Sure! We will elaborate more details of the joint optimization in the revised paper. We will provide some rough explanations here: According to Line 5 of Algorithm 1, the instructions augment the search space by the size $|P|$. In practice, the most straightforward implementation without increasing the computational complexity is to reduce $|Q'_t|$ from $q'$ to $q'/|P|$. More generally, one can randomly sample $r\times |P|$ instructions to pair with $q'/(r\times |P|)$ exemplar sequences. This implementation is simple yet effective, where $r$ controls the trade-off of focusing more on instructions vs. exemplars. > [Q2] While combining EASE with retrieval-based methods for large exemplar sets has shown better performance, could you elaborate on any potential limitations or challenges of this approach? We thank the reviewer for recognizing our effort in using retrieval-based methods to tackle large exemplar sets. 
We would like to elaborate on some limitations: (1) The filtering through retrieval-based methods completely eliminates the consideration of a large subset of exemplars in the later optimization stage. This may result in important exemplars being **left out and never explored again** in our automatic optimization. (2) The cosine similarity retrieval places **a strong bias** on preferring exemplars that are similar (in the embedding space) to the validation set, which may not yield the best performance in practice. This bias is dependent on the retrieval model and the retrieval metric used which therefore **need to be carefully selected**. These limitations come with the **simplification of the search space for practical efficiency reasons**. We will add a section to discuss the above limitations and challenges for future research and improvements. > [L2] The selection of exemplars could inadvertently reinforce biases present in the training data. The authors could address how EASE mitigates bias and ensures fairness in exemplar selection, especially in sensitive applications. Fairness is an important aspect to consider as the reviewer pointed out. We would like to refer the reviewer to App. B where we emphasize the responsible deployment of our method, including integrating safety measures and ethical constraints in the objective metric. In sensitive applications, we suggest **integrating the performance objective metric with fairness** such as demographic parity, equal opportunity, equalized odds, etc. to mitigate bias in exemplar selection. The integration can be done with an adjustable hyperparameter controlling the extent of fairness to be enforced, which depends on the critical and sensitive nature of the application. We hope our clarifications above have helped to improve your opinion of our work. **References** (Lin et al., 2024) Prompt Optimization with Human Feedback. arXiv. 
--- Rebuttal Comment 1.1: Comment: I appreciate the comprehensive rebuttal provided by the authors, which addresses the concerns and questions raised in my initial review. I will raise my rating from 5 to 6, reflecting a higher confidence in the paper's contribution and impact. --- Rebuttal 2: Comment: Dear Reviewer FzJo, Thank you for taking the time to review our paper! As the discussion period is concluding in less than a day, we hope to hear from you whether our rebuttal has sufficiently addressed your questions and concerns. We have provided additional insights and proposed potential solutions to the computational bottleneck. We also justified the use of our validation dataset, while offering an alternative approach to remove the validation dataset in future works. We also hope that our detailed guidance on joint optimization, along with our discussion of the limitations helped to improve the quality of the paper. We are more than happy to answer any further questions during the remaining discussion period. Best regards, Authors of Paper 16009
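The trade-off discussed in [W1]/[Q1] of this thread, between the order-agnostic AvgEmb and an order-preserving concatenation of exemplar embeddings, can be sketched in a few lines (illustrative, not the authors' code; the vectors stand in for outputs of a pre-trained encoder):

```python
import numpy as np

def avg_embedding(exemplar_embs):
    """Order-agnostic AvgEmb: mean of the individual exemplar
    embeddings. Cheap (no on-the-fly sequence encoding) but discards
    ordering information."""
    return np.mean(exemplar_embs, axis=0)

def concat_embedding(exemplar_embs):
    """Order-preserving alternative: concatenate individual embeddings,
    keeping position information at the cost of a k-times larger NN
    input (and hence more training data to fit the NN)."""
    return np.concatenate(exemplar_embs, axis=0)
```

Swapping two exemplars leaves `avg_embedding` unchanged but changes `concat_embedding`, which is exactly the order sensitivity the discussion hinges on.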
Summary: The authors propose EASE, a method for optimizing the selection of few-shot examples for prompting black-box LLMs. EASE is an iterative algorithm that combines NeuralUCB and Optimal Transport. In iteration t, EASE trains a neural network to map embeddings of strings (of few-shot examples) to their average score on a validation set. Then, they sample a subset of possible combinations of few-shot examples, filter them with optimal transport, and use NeuralUCB to select a set that maximizes their acquisition function. The selected set is evaluated and added to the pool of embedding -> score pairs. The authors evaluate on a set of existing tasks and a few synthetic tasks they created. Overall, the proposed method outperforms the baselines nearly across the board. Strengths: 1) The paper overall is easy to follow. 2) The results are very strong and the range of baselines is satisfactory. 3) The hypothesis and synthetic task evaluation is quite valuable and insightful. 4) The method underlying EASE is likely to spur future approaches in this space. Weaknesses: 1) The method does not really justify why we should expect off-the-shelf embeddings (into a single unlearned vector no less) of few-shot example strings to encode enough information for NeuralUCB to work effectively. In general, off-the-shelf encoders may or may not highlight the important aspects of the prompt; they're not trained to do that. The authors discuss that embeddings will be sensitive to the ordering. This is true, but in what way do we know that such sensitivity could be leveraged effectively from a frozen task-misaligned semantic encoder? 2) A lot of the discussions regarding "retrieval-based methods" are just forced and the paper doesn't even need them. The work shows that it's empirically more effective. That's sufficient. You don't need to argue that retrieval-based methods will incur "significant additional computational overheads especially when the number of test queries is large".
That's plainly false: retrieval in the way discussed in this paper (e.g., cosine similarity) can standardly be done from billion-scale collections in 2-3 milliseconds in practice, far faster than the LLM invocation. 3) Many experimental details don't seem to have been controlled for: what's the effect of the LM choice? what's the effect of the precise budget? of the training set size? of the validation set size? How large are these currently in the first place? 4) The tasks selected are arguably overly simplistic and generally this means that the few-shot examples considered are all rather trivial. To my understanding, there are no reasoning chains or open-ended generation in any of the labels. There's no argument on why we should expect a method that works for simple direct classification-type labels to generalize to far more open-ended tasks. Such tasks exist plentifully; it's unclear why the authors do not consider any of them. 5) The authors say: "Our proposed Evo and Best-of-N are the most competitive baselines with best performance in 9 and 11 tasks, respectively." This seems to suggest that the authors believe they are the first to consider such approaches. While I think the current baselines are satisfactory, both Evo and Best-of-N approaches are components in numerous existing methods (which may involve other components), e.g. see PhaseEvo "Unified In-Context Prompt Optimization" for the former and DSPy's BootstrapFewShot with RandomSearch for the latter. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) The authors simultaneously find best-of-N to be highly effective as a baseline and yet argue for the importance of Optimal Transport in their method. Can we gain more insight on what OT ultimately selects under the hood? Intuition is lacking. 2) What's the difference between Tables 7 and 8? What about ablations on real tasks? 3) The authors say: > filter the large pool of data exemplars to eliminate those that are less relevant to the task. 
To this end, we propose to first use retrieval-based methods to select the exemplars that are more relevant to the task, and then run our EASE using this refined smaller set of exemplars. Specifically, we use a cosine similarity retriever and perform exemplar selection on D with a size n as large as 100000. What does this mean? What are examples that are "more relevant" to a task? What's the precise retrieval formulation here, i.e., what's the query and what is the corpus of documents? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
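The iterative loop described in this review's Summary can be sketched as follows. This is a toy reconstruction, not the authors' implementation: embeddings and validation scores are synthetic stand-ins, a linear (ridge-regression) UCB surrogate replaces NeuralUCB, and the optimal-transport filtering step is omitted. All names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D_EMB = 16

def embed(subset):
    # Stand-in for an off-the-shelf, order-sensitive text embedding
    # of the concatenated few-shot exemplars.
    v = np.zeros(D_EMB)
    for pos, idx in enumerate(subset):
        v += np.random.default_rng(1000 * pos + int(idx)).standard_normal(D_EMB)
    return v / np.linalg.norm(v)

def validation_score(subset):
    # Hidden validation accuracy the bandit tries to maximize
    # (one "LLM evaluation" per query; always <= 0 here).
    return -abs(sum(subset) - 12) / 12.0

# Pool of candidate exemplar subsets (here: triples of exemplar ids).
pool = [tuple(rng.choice(20, size=3, replace=False)) for _ in range(200)]

X, y = [], []          # observed (embedding, score) history
best = None
for t in range(25):    # query budget
    cand = rng.choice(len(pool), size=30, replace=False)
    if not X:
        pick = pool[cand[0]]               # cold start: take any candidate
    else:
        Xm, ym = np.array(X), np.array(y)
        A = Xm.T @ Xm + np.eye(D_EMB)      # ridge design matrix
        w = np.linalg.solve(A, Xm.T @ ym)  # posterior-mean weights

        def ucb(s, beta=1.0):
            e = embed(s)
            # predicted mean + exploration bonus (UCB acquisition)
            return e @ w + beta * np.sqrt(e @ np.linalg.solve(A, e))

        pick = max((pool[i] for i in cand), key=ucb)
    s = validation_score(pick)
    X.append(embed(pick)); y.append(s)
    if best is None or s > best[1]:
        best = (pick, s)
```

The loop mirrors the structure summarized above: fit a surrogate on the (embedding, score) history, score a sampled batch of candidate subsets with an acquisition function, evaluate the winner, and append the observation.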
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to review our paper and acknowledging the strong results, valuable insights, and possibility of future extension. We will address your concerns below: > [W1] About "off-the-shelf embeddings" We justify in the paper that off-the-shelf encoders are "commonly used for downstream clustering, semantic search, and classification". We regard **prompt optimization as a similar downstream application** that can leverage the advantages of such pre-trained encoders. As for order sensitivity, we have a dedicated section that discusses the benefit of the ordering-aware embedding through ablation studies in App. D.6. Therefore, the sensitivity to ordering is leveraged effectively in EASE to produce better performance as compared to the order-agnostic AvgEmb. However, we do observe differences for different encodings: see Tab. 16 in App. D.8. Following the reviewer's comments, a worthwhile future direction is to develop or finetune embedding models specifically for prompt optimization, such that they capture important aspects of the prompt in the latent space. > [W2] Retrieval is fast We thank the reviewer for pointing out the comparatively negligible computational overhead for retrieval. **Considering the fast retrieval in practice, we note instead that our method requires no test-time computation**. We also wish to highlight that the decreased privacy risks and the enhanced empirical effectiveness continue to be the advantages of our method. > [W3] Experimental details We refer the reviewer to **App. C for all implementation details**. We reproduce them here for ease of reference. LM choice: Default is gpt-3.5-turbo-1106. In App. D.7, we also show that EASE is generally useful for different black-box models, including GPT-4-V, GPT-4-Turbo, and Gemini Pro. Budget size: Default is 165 rounds of validation evaluation (same as [14]). 
We conduct additional experiments by increasing the budget to 500 and show the results in Tab. A1 of rebuttal PDF. Increasing the budget improves performance. Training set size: Default is 100 data points. In App. D.5, we increase to $n$=1000 and show that EASE is able to select a large number of exemplars from a larger training data pool. Validation data: Default is 20 validation data exemplars. Using a larger size may reduce performance variance and reduce "overfitting" (i.e., finding exemplars and instructions that work well only on the validation set), while having a higher cost. > [W4] Reasoning chains or open-ended generation Reasoning chains: As suggested by the reviewer, we conduct **additional experiments for tasks with reasoning chains, including MATH (Hendrycks et al., 2021), GSM8K (Cobbe et al., 2021), and AQuA-RAT (Ling et al., 2017)**. From Tab. A2 of rebuttal PDF, EASE works well for these tasks. Open-ended generation: Tasks like auto_categorization, word_sorting, LR, and LP-variant are beyond simple direct classification-type labels. For example, auto_categorization requires outputting an open-ended sentence that categorizes the inputs well; LP-variant does open-ended sentence translation following a set of nontrivial rules. Therefore, these tasks demonstrate the generalizability of EASE on various tasks of different levels of difficulty and response types. > [W5] Reference for Evo and Best-of-N We thank the reviewer for suggesting references for Evo and Best-of-N. We will **add connections to PhaseEvo for our Evo baseline, and connections to RandomSearch implemented in DSPy for our Best-of-N baseline**. These references will serve as **stronger support** for the validity of the baselines that we compare with, and hence demonstrate the effectiveness of EASE. 
### Questions > [Q1] Insight on OT Intuitively, OT selects exemplar candidates such that they are on average close to the embeddings of validation exemplars in $D_V$, as measured by cosine similarity. This intuition is derived from the specific definitions of the embedding space, the cost function, and the discrete measure in Lines 179-188. The advantages of OT are two-fold: (1) OT allows **efficient filtering of less useful exemplars** due to its computational efficiency and (2) OT operates entirely on exemplar samples and their embeddings **without needing to query the target LLM**. If the reviewer is referring to using Best-of-N in place of OT in our EASE, it would incur many more API calls to the target LLM. Alternatively, completely removing OT also degrades performance, as suggested in the ablation study in Tab. 7 (NeuralUCB vs. EASE). > [Q2] Tables 7 and 8 Tab. 7 presents the validation performance and Tab. 8 presents the test performance. We add experiments for real tasks that involve open-ended sentence answers and present the results in Tab. A3 of rebuttal PDF. The results are consistent with the original paper in that using both NeuralUCB and OT together contributes to the success of EASE. > [Q3] Retrieval formulation Query: Validation data exemplars $D_V$ Corpus: Training set exemplars in $D$ Relevance: An embedding model with a cosine-similarity metric Procedure: For each validation data exemplar in $D_V$, we retrieve the most similar/relevant training set exemplars from $D$. Combine them to form a smaller set of exemplars (than $D$). Therefore, we have retrieved a small subset of exemplars from $D$ that are similar (i.e., relevant) to the validation set. Then, we proceed with EASE using this reduced subset for efficiency. We will make this clearer in the revised paper. We hope our clarifications above have helped to improve your opinion of our work. **References** (Hendrycks et al., 2021) Measuring mathematical problem solving with the math dataset. 
NeurIPS. (Cobbe et al., 2021) Training verifiers to solve math word problems. arXiv. (Ling et al., 2017) Program induction by rationale generation: Learning to solve and explain algebraic word problems. ACL. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The retrieval formulation is a bit strange. Aren't the training and validation sets, both, from the same distribution? What's the intuition that justifies using each validation example to retrieve a training example, besides possibly overfitting to the validation set? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the prompt reply! The retrieval method is mainly implemented as an **independent step to pre-filter candidates** serving as a promising extension of our EASE to handle a large set of exemplars. The intuition is to prefer exemplars that are similar to the validation set in order to achieve high performance. At the same time, it **encourages diversity** by selecting a subset that contains data from possibly different subgroups represented by different validation data exemplars. Specifically, for each validation data exemplar, we retrieve the most relevant training data, so the combined dataset contains data from different data subgroups. We usually assume that the validation set and the test set follow the same distribution in the learning setting. By minimizing the validation error, we also minimize the test error. This in fact follows the common practice in classical machine learning, where we typically use the validation set to select the best-performing models or hyper-parameters. We hope that our clarification helps!
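The OT intuition discussed in [Q1] and [Q3] above can be illustrated with a small numpy sketch. This is not the authors' code: the embeddings are random placeholders, entropic (Sinkhorn) regularization is used to compute the transport plan, and ranking candidate exemplars by their per-row transport cost is our own assumption about how the plan could drive filtering.

```python
import numpy as np

def sinkhorn(C, eps=0.1, iters=1000):
    """Entropic-regularized OT plan between two uniform discrete
    measures, given a cost matrix C of shape (n, m)."""
    n, m = C.shape
    a, b = np.ones(n) / n, np.ones(m) / m   # uniform marginals
    K = np.exp(-C / eps)
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):                  # alternating scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]      # transport plan

rng = np.random.default_rng(0)
E_train = rng.standard_normal((100, 32))    # candidate exemplar embeddings (D)
E_val = rng.standard_normal((10, 32))       # validation exemplar embeddings (D_V)
E_train /= np.linalg.norm(E_train, axis=1, keepdims=True)
E_val /= np.linalg.norm(E_val, axis=1, keepdims=True)

C = 1.0 - E_train @ E_val.T                 # cosine cost: lower = more similar
P = sinkhorn(C)                             # (100 x 10) plan
row_cost = (P * C).sum(axis=1)              # cost each exemplar pays to cover D_V
keep = np.argsort(row_cost)[:20]            # retain the 20 cheapest-to-transport
```

Exemplars whose transport toward the validation measure is cheap are, on average, close (in cosine similarity) to validation exemplars, matching the stated intuition; all of this runs on embeddings alone, with no calls to the target LLM.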
Rebuttal 1: Rebuttal: # Global response We sincerely appreciate the efforts of all our dedicated reviewers. The constructive feedback in the reviews significantly enhanced the quality of our paper. We are very grateful! In response to the specific questions raised by the reviewers and to address potential weaknesses, we have provided a detailed reply to each of the reviewers. In these responses, we use short forms such as **"[Q1]"** to denote the first Question, **"[W1]"** to refer to the first Weakness, and **"[L1]"** to refer to the first limitation. In this global response to all reviewers, we have attached a PDF file with tables containing additional results to support our paper. Please refer to the individual responses for detailed explanations, and we provide a summary here: - [*Reviewer u1fC* and *Reviewer NbA9*] We increased the query budget to 500 iterations to study the effect of the query budget on performance. - [*Reviewer u1fC* and *Reviewer NbA9*] We showed the effectiveness of EASE on real-world open-ended generation tasks with reasoning chains. - [*Reviewer u1fC*] We extended the ablation studies on the necessity of both OT and NeuralUCB done in App. D.3 to additional open-ended real tasks. - [*Reviewer NbA9*] We performed experiments on additional benchmarks used in the TEMPERA paper. - [*Reviewer 1tAm*] We provided additional results on Meta's newest open-weight model Llama-3.1-8B-Instruct to ensure better reproducibility. - [*Reviewer 1tAm*] We adopted the new experimental setting suggested by *Reviewer 1tAm* to allow different numbers of exemplars to be selected. The experiments demonstrated the superiority of EASE and provided insights into the number of exemplars to be included in the prompt. 
This work advances research on automatic prompt optimization in two ways: (1) As highlighted by *Reviewer FzJo* and *Reviewer NbA9*, our "EASE algorithm, which combines neural bandit optimization and optimal transport, is a **unique approach to exemplar selection**". This ensures not only query efficiency through neural bandits, but also computational efficiency by "incorporating a **novel component** that uses optimal transport to reduce the search space"; (2) As pointed out by *Reviewer FzJo*, *Reviewer NbA9*, and *Reviewer 1tAm*, our EASE "**takes into account the order of the exemplars**" and "**jointly** optimizes exemplars and instructions for improved performance", which provides researchers and practitioners a **fully automated pipeline for prompt optimization**. Our work can significantly impact the field of prompt optimization by **bringing principled optimization and search methods** to **complement the existing** heuristic and retrieval-based approaches. We also hope to draw researchers' attention to the **ordered nature of exemplars** in the input sequence and the **joint effect** from the interactions between exemplars and instructions in the prompt. We share similar thoughts with *Reviewer u1fC* that "the method underlying EASE is **likely to spur future approaches** in this space". Again, we would like to extend our thanks to the reviewers for their constructive feedback and valuable insights. Best regards, Authors of Paper 16009 Pdf: /pdf/a021913be87f35c5991a6a9bc369abb0e858aa6d.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Consensus Learning with Deep Sets for Essential Matrix Estimation
Accept (poster)
Summary: This work proposes a simpler yet effective network architecture based on Deep Sets for the estimation of essential matrices. The method is based on three key properties: 1. A two-stage noise-aware training scheme first uses noise-free inlier matches for training, and then uses the original matches and full loss for training; 2. A noise head predicts the displacement noise in inlier matches. 3. A classification head classifies point matches as either inliers or outliers. Evaluations show that the method outperforms previous methods in terms of accuracy and generalization. Strengths: 1. The proposed network architecture is simple yet effective. 2. The NAC block provides both denoising and inlier/outlier classification, well validated through a toy example, i.e., a line-fitting task in Fig. 2. 3. The key properties proposed are shown to be effective in the ablation study. 4. The paper is well-structured and is generally easy to follow. Weaknesses: 1. While the DeepSets-based architecture is shown to be effective, traditional fully convolutional networks, such as UNet, are also applicable. Is there any strong reason why a DeepSets-based network is preferred and explored first? 2. The authors remove half of the input matches to test the effect of outlier pruning. Is this approach reasonable? Is this value adjustable? It would be better to add an ablation study showing performance changes as the number of removed matches varies. Furthermore, this is an indirect method to validate the effectiveness of the proposed inlier/outlier classification head. Why was the ablation study not conducted with and without this head? 3. While the paper provides an ablation study on the proposed key properties, it lacks an ablation of the network architecture and hyperparameters, such as the number of set layers, the dimension of the Set Encoder, etc., which would also be valuable to know — for example, how much of an impact do these parts of the network have on the final results? 
4. The paper lacks details on applying the Optimal Triangulation Method to set inlier/outlier labels and obtain ground truth, noise-free inlier keypoints. Including these details would help readers better understand the training process. 5. What is the unit of the y-axis in Fig. 3? 6. To the best of my knowledge, the ground truth poses in the SUN3D dataset are inaccurate (see supplemental material in {1}). Why did the authors use this dataset solely to follow the evaluation protocol of previous methods? For indoor scenes, is it more suitable to use ScanNet dataset {2} for evaluation as [42]? 7. While the paper claims that the network is simpler than existing methods, the multiple NAC blocks and the noise regression module may introduce a certain level of complexity. It would be better to provide experiments and discussions on the computational efficiency of the proposed method compared to existing methods, including runtime and model parameter scale, which are critical for real-world applications. {1} Bellavia, Fabio. “Image Matching by Bare Homography.” IEEE Transactions on Image Processing 33 (2023): 696-708. {2} Dai, Angela, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas A. Funkhouser and Matthias Nießner. “ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes.” IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017): 2432-2443. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. While the DeepSets-based architecture is shown to be effective, traditional fully convolutional networks, such as UNet, are also applicable. Is there any strong reason why a DeepSets-based network is preferred and explored first? 2. The authors remove half of the input matches to test the effect of outlier pruning. Is this approach reasonable? Is this value adjustable? It would be better to add an ablation study showing performance changes as the number of removed matches varies. 
Furthermore, this is an indirect method to validate the effectiveness of the proposed inlier/outlier classification head. Why was the ablation study not conducted with and without this head? 3. While the paper provides an ablation study on the proposed key properties, it lacks an ablation of the network architecture and hyperparameters, such as the number of set layers, the dimension of the Set Encoder, etc, but this would also be valuable to know — for example, how much of an impact do these parts of the network have on the final results? 4. The paper lacks details on applying the Optimal Triangulation Method to set inlier/outlier labels and obtain ground truth, noise-free inlier keypoints. Including these details would help readers better understand the training process. 5. What is the unit of the y-axis in Fig. 3? 6. To the best of my knowledge, the ground truth poses in the SUN3D dataset are inaccurate (see supplemental material in {1}). Why did the authors use this dataset solely to follow the evaluation protocol of previous methods? For indoor scenes, is it more suitable to use ScanNet dataset {2} for evaluation as [42]? 7. While the paper claims that the network is simpler than existing methods, the multiple NAC blocks and the noise regression module may introduce a certain level of complexity. It would be better to provide experiments and discussions on the computational efficiency of the proposed method compared to existing methods, including runtime and model parameter scale, which are critical for real-world applications. {1} Bellavia, Fabio. “Image Matching by Bare Homography.” IEEE Transactions on Image Processing 33 (2023): 696-708. {2} Dai, Angela, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas A. Funkhouser and Matthias Nießner. “ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes.” IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017): 2432-2443. 
Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for these comments. Q1. UNet: Our network obtains as input an unordered set of point matches of a priori unknown cardinality. Such input is naturally suitable for DeepSets and other permutation equivariant architectures. Standard convolutional architectures, including UNet, cannot handle such input, since they require inputs on a grid. Graph convolutional architectures can be applied (and indeed are applied in previous methods, including CLNet, NCMNet, and also U-Match, which uses a graph UNet architecture). These methods, however, are more complex and, as our experiments show, are slower (see the attached Table R.1) and yield inferior accuracies (see Tables 1-2 in the paper). Q2. Pruning amount: For our ablation, we followed previous approaches, including CLNet, NCMNet, and BCLNet, which pruned half of the input matches. MGNet further performed an ablation study with different pruning ratios and showed that pruning reduces the accuracy of the predictions. We should emphasize that our confidence scores (Eq. 2 in the paper) perform “soft pruning,” which appears to be more effective than “hard pruning.” Ablating the classification head: We evaluate the performance of our model without the classification head. Due to the prevalence of outliers, performance deteriorates to chance levels (0.014% and 0.025% mAP5 score for in-scene and cross-scene, respectively). We further demonstrate the effectiveness of our classification head by providing its precision, recall, and F1 scores (Table R.3). Q3. Hyperparameters ablation studies: We include in Table R.5 an ablation of the number of NAC blocks (which affects the number of DeepSet layers) and the encoder dimension. These ablations further justify our choices. Q4. Optimal Triangulation Method: We will add these details to the paper. Q5. Y-axis units in Fig. 3: The Y-axis represents the number of image pairs. Q6. 
ScanNet evaluation: Indeed, we used SUN3D to follow the evaluation protocol of previous methods. For the rebuttal, we include in Table R.2 the results of cross-dataset generalization on ScanNet. We compared our results to NCMNet predictions. Our method achieves superior results on this highly challenging dataset. Since previous methods have not published their model trained on SUN3D, we trained NCMNet from scratch using the official code release. Q7. Resources usage: We address this question in Table R.1. While our model uses more parameters than NCMNet and BCLNet, which use graph attention architectures, it is 4-6 times faster than these methods and consumes less GPU memory at inference. --- Rebuttal Comment 1.1: Title: Thanks for the authors' feedback. Comment: I read the rebuttal and all other reviews. This rebuttal has addressed most of my concerns, but some questions still remain. 1. For the reply to Q6, could you please provide the results of other baseline methods on the ScanNet dataset? 2. For the reply to Q7, somewhat counterintuitively, the model achieves a faster runtime despite using more parameters. --- Reply to Comment 1.1.1: Comment: Thank you for your response and questions. 1. Here are the results, with two more baseline methods: | Method | mAP5 | mAP10 | mAP20 | mAP30 | |--------------------|-------|-------|-------|-------| | U-Match | 6.53 | 13.16 | 22.84 | 30.44 | | ConvMatch w/RANSAC | 6.53 | 15.15 | 26.54 | 33.77 | | NCMNet | 8.60 | 15.16 | 25.65 | 33.56 | | NACNet (Ours) | **10.33** | **17.90** | **28.55** | **36.08** | We note that other previous methods did not provide pre-trained models for SUN3D or config files with hyperparameters. 2. Indeed, our method appears to achieve a faster runtime (11.12ms) despite using more parameters. This is due to the simplicity of our network’s architecture, which utilizes 82 standard fully-connected layers. 
In comparison, BCLNet (runtime 46.94ms) interleaves k-nearest neighbors (k-nn), graph convolution, and attention layers within a total of 190 layers. Further profiling of their code shows that the k-nn (which involves no parameters) takes 16% of this time (~7.5ms), and the convolutions take 12% (~5.6ms). Moreover, the forward pass over 190 layers is sequential and cannot be parallelized as efficiently as shallower networks.
Summary: The paper proposes a DeepSets-based architecture for essential matrix estimation. The proposed architecture is required to overcome positional noise, be capable of handling outlier matches which comprise a significant portion of the input data, generalize to unseen image pairs, and be capable of handling an arbitrary number of point matches. The proposed architecture comes with the following: - a two-stage noise-aware training scheme - a noise head for predicting positional inlier noise, and - a classification head for inlier/outlier discrimination. The architecture consists of Noise-Aware Consensus (NAC) blocks - these are stacks of set encoders (i.e., DeepSets) used to obtain latent representations of input sets and for classification of points as inliers or outliers. The output of these blocks is used to estimate the essential matrix. Strengths: - A simple architecture is proposed that offers better empirical performance compared to current methods. Weaknesses: - An ablation on the choice of set functions is missing. Technical Quality: 3 Clarity: 3 Questions for Authors: - Can an ablation on the choice of set functions be provided? For instance, recent methods such as [1] and [2] have been shown to perform better than DeepSets for modeling interactions over sets and these might provide a further performance boost to the proposed architecture. - Also, how does the number of DeepSets layers affect performance? [3] shows that special techniques are required to ensure effective learning when multiple DeepSets layers are stacked. Curious how this correlates with your experiments. ## References [1] Lee, Juho, et al. "Set transformer: A framework for attention-based permutation-invariant neural networks." International conference on machine learning. PMLR, 2019. [2] Andreis, Bruno, et al. "Mini-batch consistent slot set encoder for scalable set encoding." Advances in Neural Information Processing Systems 34 (2021): 21365-21374. [3] Zhang, Lily, et al. 
"Set norm and equivariant skip connections: Putting the deep in deep sets." International Conference on Machine Learning. PMLR, 2022. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: While no discussion of limitations is provided, I do not see any immediate negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
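For context on the DeepSets building block that this review and its references discuss, here is a minimal permutation-invariant set encoder. It is a generic sketch with hypothetical layer sizes, not the paper's NAC block (which additionally interleaves equivariant layers and per-point classification/noise heads):

```python
import numpy as np

rng = np.random.default_rng(0)

class DeepSetEncoder:
    """phi on each element, permutation-invariant sum pooling, then rho."""

    def __init__(self, d_in=4, d_hid=32, d_out=8):
        self.W_phi = rng.standard_normal((d_in, d_hid)) * 0.1
        self.W_rho = rng.standard_normal((d_hid, d_out)) * 0.1

    def __call__(self, X):
        # X: (n, d_in) -- an unordered set of arbitrary cardinality n,
        # e.g., n point matches with (x1, y1, x2, y2) coordinates.
        h = np.maximum(X @ self.W_phi, 0.0)  # phi applied per element
        pooled = h.sum(axis=0)               # invariant to element order
        return pooled @ self.W_rho           # rho on the pooled feature

enc = DeepSetEncoder()
X = rng.standard_normal((50, 4))
out = enc(X)
out_perm = enc(X[rng.permutation(50)])
# the encoding is identical (up to float rounding) for any ordering
```

Because pooling is a sum, the encoder handles any set cardinality and any ordering of the matches, which is exactly the input regime the reviews describe; an equivariant variant would concatenate `pooled` back onto each per-element feature instead of discarding the per-element axis.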
Rebuttal 1: Rebuttal: We thank the reviewer for these comments. Q1. Set transformer ablation: We tested the SetTransformer in a quick synthetic experiment. We trained both the SetTransformer and our DeepSet model on noise-free data. While our model achieves a highly accurate pose estimation of 86.52% mAP5 (due to the lack of noise), the SetTransformer model achieves much less accurate results of 66.79% mAP5. We believe the SetTransformer architecture suffers from the lack of global features and a narrow bottleneck in the ISAB block. We leave improvement of the SetTransformer architecture to future work. Q2. Number of DeepSet layers: We provide further ablations in Table R.5, showing that the removal of one NAC block (two blocks instead of three) reduces accuracy by nearly 7%. Thank you also for bringing up the work of Zhang et al. [3]. Despite some architectural differences, our implementation respects the principle of keeping a “clean path” for gradient descent (e.g., by utilizing skip connections in a similar design as in [3]), enabling the usage of multiple DeepSet layers.
Summary: In this work, the authors propose NACNet (Noise-Aware Consensus Network) for robust essential matrix estimation. For this purpose, the authors apply a DeepSets-based architecture that predicts an inlier/outlier class as well as an inlier displacement error estimate. The authors also propose a two-stage training: (1st) train with noise-free matches by disabling the noise-prediction head and (2nd) train on real data with noise. Extensive experiments with in-scene, cross-scene, cross-dataset, and cross-feature generalization and an extensive ablation study show that the proposed simpler method is superior to other deep-learning-based consensus methods with complicated architectures. Strengths: Overall, the paper is well written with sufficient mathematical rigor to prove the authors' claim that "Our network achieves accurate recovery that is superior to existing networks with significantly more complex architectures." (1) Predicting noise in key-points without feature descriptors is well-motivated and proven to work through the ablation study. (2) The architecture design is simple, with three DeepSet-based NAC blocks and a model regression head on top. The network predicts confidence along with inlier-outlier classification (similar to [1]), which allows robust estimation of the essential matrix via differentiable DLT. (3) The whole network is end-to-end differentiable. (4) Benchmarking and experiments across different datasets show the good generalization capability of the proposed method. This can have high impact in relative-pose estimation, which is essential to multi-view geometry. References [1] Yi, Kwang Moo, et al. "Learning to find good correspondences." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. Weaknesses: The paper discusses predicting noise-free key-points as an output, but lacks sufficient discussion of the difference between noisy and noise-free points. (1) How does keypoint noise change feature points? 
It would have been nicer to see some visualization / empirical metric of the change in key-point locations for the predicted noise-free key-points. (2) Another interesting question that is not tested: is the change in key-point position made to fit the "Model", or do the points actually become sub-pixel accurate as shown in [2]? (3) Some more comparisons when the input has fewer/more outliers would be useful. Empirical evidence around the effect of cardinality is also missing. References [2] Lindenberger, Philipp, et al. "Pixel-perfect structure-from-motion with featuremetric refinement." Proceedings of the IEEE/CVF international conference on computer vision. 2021. Technical Quality: 4 Clarity: 4 Questions for Authors: L97: "Our work is first one to apply keypoint position correction without using a geometric model" -- The last loss, which defines the distance between noise-free keypoints and the prediction, uses "geometry" (as in optimal triangulation) to get noise-free points. What does it mean here when the authors say "noise-free"? What is the size of the model? How fast is the inference? It would be interesting to know how close we are to RANSAC in terms of runtime. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors discuss limitations of their approach, especially the imperfect generalization to other datasets / features, and suggest adding a degeneracy test block inside the model. This work doesn't have societal impact, so the authors do not discuss it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for these comments. W1. Denoising evaluation: We include a summary box plot (Figure R.1) of the noise distribution before and after our key-point denoising, measured with respect to the ground truth essential matrix. The median of the mean reprojection error (over each pair) is reduced by 0.202 pixels and even more (0.246 pixels) for image pairs with pose error (maximum between the translation and rotation errors) lower than five degrees. W2. Noise prediction: In principle, our denoising module is trained with correspondences cleaned with the ground truth essential matrix, and so it is expected to predict noise-free key-point locations independent of the predicted essential matrix. Moreover, as our ablation indicates (Table 4 in the paper), this denoising process improves the accuracy of the essential matrix regression, indicating that it yields a better fit to true key-point positions rather than to the predicted model. W3. Performance with different outlier ratios: We address this question in Table R.4. As expected, accuracy drops as the fraction of outliers increases or as the number of inliers decreases. Somewhat counter-intuitively, we obtain somewhat low accuracy with small fractions of outliers when SIFT features are used. This occurs because our test data in this segment is small (only 30 pairs). Q1. Meaning of “noise-free”: We apologize for this confusion. Existing methods use the essential matrix estimated *in inference* to correct the positions of key-points. Our network, in contrast, learns to denoise these positions independently from the estimated essential matrix, as we further explained in W2 above. Of course, the noise-free points used *in training* are obtained by applying geometry (triangulation). We will rephrase this statement. Q2. Model size and inference time: We address this question in Table R.1. 
While our model uses more parameters than NCMNet and BCLNet, which use graph attention architectures, it is 4-6 times faster than these methods and consumes less GPU memory at inference. We further compare our runtime to RANSAC-based methods. With 100K iterations, RANSAC is significantly slower than our method. We note that RANSAC is implemented on the CPU. Kornia [1] introduced a GPU RANSAC implementation for fundamental matrix estimation. We tested their resource usage on the same data to demonstrate the potential of GPU implementations for RANSAC. Using a batch size of 10000 samples, their model runtime and maximum GPU memory usage were 40.94ms and 414.66MB, respectively, which are 4 times higher than our model. [1] Riba, Edgar, et al. "Kornia: an Open Source Differentiable Computer Vision Library for PyTorch.” Winter Conference on Applications of Computer Vision. 2020. --- Rebuttal Comment 1.1: Title: Thanks for taking the time to write a rebuttal. Comment: Reply W1: Thanks for providing the graph. What happens for image pairs with high pose error? (Or what are some of the highest key-point movements you see between before and after denoising?) What I am trying to understand is what the noise-prediction block is learning. Is it learning through image gradients / feature gradients? Or is there something else going on here? Reply W2: Thanks, all clear. Reply W3: Makes sense. This is a good insight; if possible, I'll encourage the authors to make space for it in the paper / appendix. Reply Q1: Thanks, this clarifies. Reply Q2: Amazing! I think this is very important. 11ms is highly desirable. Also, the low memory usage shows that this method can also be used on smaller memory-constrained devices. Overall, I am satisfied with the authors' response and the paper does show a practical and novel algorithm for consensus learning. I will keep my rating at 8. --- Reply to Comment 1.1.1: Comment: Thank you for these comments. We will modify the paper accordingly. W1. 
With high pose error, we see nearly no correction at all. Perhaps we can answer your question regarding the noise prediction block with an informal discussion. Our noise prediction block is a function of a match $x_i$ and the full set of input matches $X$. This function is trained to minimize a loss that depends on $x_i$ and the ground truth essential matrix $E$, and we expect it to produce the maximum a posteriori noise vector given the set $X$. Here the prior encodes the noise distribution of the feature extractor; for SIFT, this may be an isotropic Gaussian distribution. The set $X$ imposes a distribution over possible essential matrices, and those, in turn, narrow down the likely directions and magnitudes of the predicted error.
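For context on the geometric correction mentioned in Q1 (existing methods use an essential matrix estimated at inference to correct key-point positions), here is a minimal sketch of the classical first-order (Sampson) correction. This is not the authors' learned denoiser but the standard E-dependent alternative it is contrasted with; the isotropic-noise assumption and all names are illustrative.

```python
import numpy as np

def sampson_correct(x1, x2, F):
    """First-order (Sampson) correction of a match (x1, x2) toward the
    epipolar constraint x2^T F x1 = 0, assuming isotropic key-point noise.
    x1, x2 are inhomogeneous 2D points; F is a 3x3 fundamental/essential matrix."""
    h1 = np.array([x1[0], x1[1], 1.0])
    h2 = np.array([x2[0], x2[1], 1.0])
    Fh1 = F @ h1            # epipolar line of x1 in image 2
    Fth2 = F.T @ h2         # epipolar line of x2 in image 1
    err = h2 @ F @ h1       # algebraic epipolar residual
    denom = Fh1[0]**2 + Fh1[1]**2 + Fth2[0]**2 + Fth2[1]**2
    # Minimum-norm displacement (to first order) that zeroes the residual.
    dx1 = -(err / denom) * Fth2[:2]
    dx2 = -(err / denom) * Fh1[:2]
    return np.asarray(x1) + dx1, np.asarray(x2) + dx2
```

The learned denoiser discussed above differs in that its correction does not depend on a previously estimated E at inference time.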
Summary: The authors propose a method to tackle the traditional computer vision problem of estimating the essential matrix between two camera views of the same scene based on a set of point matches. The method distinguishes between inlier / outlier matches, and explicitly models the displacement noise in the inlier matches using a “Noise Aware Consensus Network”. The model is trained in two stages: first, the model sees only noise-free inlier matches mixed with outlier matches, and second, real-world blends of noisy inlier and outlier matches. The authors show that this framework is able to compute accurate essential matrices with a variety of different image descriptors over different datasets. Strengths: 1. The paper presented convincing performance as compared to the baselines, especially in the cross-scene / cross-dataset settings. This attribute is perhaps most important for practical applications, and shows that the explicit noise reasoning scheme appears to work well. 2. The ablation studies shed light on the contributions of the 2-stage training and keypoint denoising steps. These will help guide future research. 3. The architecture and training details are quite clear. Together with the authors' promise of releasing the code, this work is a meaningful contribution to the research community. 4. The paper is easy to read and navigate; different parts of the design are motivated clearly. Weaknesses: 1. It would be important to understand how the capacity of the NACNet architecture compares with the baseline methods - while it indeed appears simpler and more streamlined than pipelines such as BCNet, it would be important to understand whether the complexity has simply been wrapped into higher-capacity models. 2. It seems that correspondence pruning should have a large impact on the performance of the model as well, but not a lot of attention is given to this area (e.g. 
in ablation studies); it seems that only some qualitative results are shown in Fig 4 and in the appendix. However, what is the actual performance of the prediction of Y_i (inlier / outlier labeling) in the different settings (in-scene, cross-scene, cross-dataset, etc)? Does this have a large impact on the performance? Technical Quality: 3 Clarity: 4 Questions for Authors: It would be great to read the authors’ responses to the issues raised in the weaknesses section. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors discussed the limitations of their work well with respect to the requirements in the Checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for these comments. W1. Network capacity and complexity: We address this question in Table R.1. While our model uses more parameters than NCMNet and BCLNet, which use graph attention architectures, it is 4-6 times faster than these methods and consumes less GPU memory at inference. W2. Pruning and classification evaluation: We discuss the effect of correspondence pruning in our paper (ablation is mentioned in Section 4.4, lines 246-255). Similar to MGNet, we see that pruning has a slight negative effect (~-0.8%) on accuracy. We further attach Table R.3 to show classification accuracy. It can be seen that our model achieves higher F1 scores compared to previous methods, possibly explaining our overall improved regression accuracy. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. It is great to see that the proposed model is more efficient than the baselines, and that the inlier/outlier classification seems to work well. I have no other concerns. Consequently, I've also adjusted the score up by 1 point. --- Reply to Comment 1.1.1: Comment: Thank you for this positive feedback!
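As a reference for how classification numbers like those in Table R.3 are typically computed (the exact protocol is not shown in this excerpt), a minimal F1 computation over predicted inlier/outlier labels, assuming "inlier" is the positive class:

```python
import numpy as np

def inlier_f1(pred, gt):
    """F1 score of binary inlier/outlier labels (1 = inlier, the positive class)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.sum(pred & gt)                      # correctly predicted inliers
    precision = tp / max(pred.sum(), 1)         # guard against zero predictions
    recall = tp / max(gt.sum(), 1)              # guard against zero ground-truth inliers
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A higher F1 here indicates better inlier/outlier separation, which is the rebuttal's proposed explanation for the improved regression accuracy.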
Rebuttal 1: Rebuttal: We thank the reviewers for their comments. We addressed each of your questions individually. Please note the attached pdf. Pdf: /pdf/f7741c49934fbcb69e1799bd936ceb843208495b.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Memorize What Matters: Emergent Scene Decomposition from Multitraverse
Accept (spotlight)
Summary: This paper presents a 3D Gaussian mapping framework that is able to convert multitraverse videos from the same region into an environment map while segmenting out 2D ephemeral objects. Leveraging the multitraverse data, the scene decomposition emerges in an unsupervised manner, as a result of the consensus in the background and dissensus in the transient foreground across traversals. The paper demonstrates promising results in object segmentation, 3D reconstruction, and neural rendering. Strengths: + The paper is overall well written and easy to follow. + The proposed approach is self-supervised, which does not rely on any object annotations, and hence holds good potential for scalability. + The paper demonstrates the value of multitraverse data in scene decomposition, which is relatively under-exploited in prior work. Weaknesses: - The method performs segmentation and finally GS mapping sequentially, which means errors in segmentation would propagate into the subsequent GS mapping. Why not perform multiple rounds of segmentation and GS mapping to continue improving the robustness against outliers? - The novelty of the paper is somewhat limited. A similar idea of emergent scene decomposition has been demonstrated in the prior work EmerNeRF, where DINO features are also used. This paper differs mostly in using multitraverse data and adopting 3DGS as the scene representation instead of NeRF. - In the object segmentation task, the paper compares with unsupervised segmentation methods but lacks comparisons with object discovery methods. As noted in Sec. 6, the proposed method is highly relevant to object discovery. - The comparison with STEGO and CAUSE is in a sense unfair because they do not use multitraverse data. A simple baseline would be performing a matching (e.g., with SIFT) for each segmented region with other traversals, a failure of which indicates the region being transient, and vice versa. 
- For 3D environment reconstruction, only DepthAnything is compared, which is, however, a single-image depth estimation method, whereas the proposed method is multiview based. How about comparing with SOTA multiview stereo approaches? In addition, why is Chamfer Distance adopted as the evaluation metric? Chamfer distance is a weaker metric as it does not leverage correspondence to the ground truth, but here we do have pixel-level correspondence by projecting LiDAR to the image plane. Lastly, is the depth evaluation carried out only on background regions? There is no explanation in this regard. - As shown in Table 3, the neural rendering performance of EnvGS is only marginally better than the original 3DGS. It is even lower than 3DGS in PSNR. Technical Quality: 2 Clarity: 3 Questions for Authors: The questions to be answered are detailed in the weaknesses section, mainly on stronger baselines in object segmentation and evaluation of 3D mapping. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The paper has discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
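The metric distinction the reviewer draws can be made concrete: Chamfer distance scores nearest-neighbor agreement between unordered point sets, while projecting LiDAR into the image plane yields per-pixel ground-truth depths and hence a correspondence-based RMSE. A rough sketch (the array shapes and pinhole intrinsics `K` are illustrative assumptions, not the paper's evaluation code):

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3):
    nearest-neighbor matching, no fixed correspondence."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def depth_rmse(pred_depth, lidar_points, K):
    """Depth RMSE using LiDAR projected to the image plane: each LiDAR
    point (in camera coordinates) supplies a ground-truth depth at one pixel."""
    z = lidar_points[:, 2]
    uv = (K @ lidar_points.T).T
    uv = np.round(uv[:, :2] / uv[:, 2:3]).astype(int)
    H, W = pred_depth.shape
    ok = (z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    pred = pred_depth[uv[ok, 1], uv[ok, 0]]
    return float(np.sqrt(np.mean((pred - z[ok]) ** 2)))
```

Unlike Chamfer distance, the RMSE penalizes each prediction against the exact pixel it corresponds to, which is the stronger test the reviewer suggests.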
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive feedback and helpful suggestions. Please find our detailed response below. --- *Q1. Multiple rounds* We have added one additional round of segmentation and mapping by incorporating the emerged masks into COLMAP to remove transient objects during Gaussian initialization. This resulted in improved rendering scores, with PSNR increasing from 23.11 to 23.26 and SSIM from 0.8299 to 0.8372 in 1000 images at one location. We will include more experiments using multiple rounds. We appreciate your insightful comments. --- *Q2. EmerNeRF* We would like to emphasize that our method is fundamentally different from EmerNeRF, not just in using multitraversal data and 3DGS. Although both our method and EmerNeRF address self-supervised scene decomposition, the key ideas are entirely different: **EmerNeRF learns static-dynamic separation by effective parameterization of corresponding fields and leverages scene flow as an inductive bias, whereas our method leverages multitraversal consensus as a self-supervised signal to decompose transient objects from the permanent environment.** As a result, our method can decompose not only dynamic objects but also static yet movable objects, e.g., parked cars. In contrast, EmerNeRF can only decompose moving objects. Besides, **our method only needs camera input while EmerNeRF requires LiDAR. Our insight is to leverage more camera observations to boost the reconstruction performance.** In summary, compared to EmerNeRF, our method has three differences: * Our method introduces the **conceptually novel idea of multitraversal consensus-based scene decomposition**. * Our method provides a **more powerful scene decomposition** approach, capable of decomposing both moving and static yet movable objects. * Our method adopts a more **cost-effective** sensor setup. We have added quantitative and qualitative results of EmerNeRF. 
We found that EmerNeRF cannot remove static cars because it requires motion to decompose dynamic parts. Additionally, the decomposition performance is not ideal, likely due to the monocular camera input. The segmentation IoU is **7.3%**, which is significantly lower than our method's (>40%). Visualizations are shown in Figures 6 and 7 in the PDF under the global rebuttal. We hope our clarifications address your concerns regarding the novelty of our work. *As agreed upon by the other two reviewers, our method is an innovative approach with key insights.* --- *Q3. Object discovery* We agree that adding comparisons to object discovery methods would make our experiments more comprehensive. We have included results from FOUND [1], which is a SOTA unsupervised object discovery method, and found them to be much worse than our method, with an IoU of only **12.77%**. Some qualitative examples are shown in Figure 8 in the PDF under the global rebuttal. **Another notable advantage of our method compared to object discovery methods is that it not only discovers objects in 2D but also obtains a 3D representation of the static environment.** We will add more results and an expanded literature survey. [1] Unsupervised object localization: Observing the background to discover objects. CVPR 2023. --- *Q4. Segmentation baselines* We agree that adding more baselines using multiple traversals would make our experiments more comprehensive. We have added another baseline method: we directly matched DINO features across traversals and identified patches with few matching counterparts in other traversals as transient. The results were less convincing, with an IoU of **10.17%**. Visualizations are shown in Figure 9 in the PDF under the global rebuttal. 
We would like to emphasize that the key difference between our method and existing methods is that **we lift 2D to 3D, which facilitates the identification of 2D objects through 3D representation learning.** As a result, *our method not only segments objects in 2D with high quality but also obtains a 3D environment map.* We will include more baseline results and discussions. --- *Q5. Multiview stereo and depth metrics* Our method is a 3DGS-based approach that integrates both geometric and photometric information. It can be used not only for depth estimation but also for various tasks such as rendering and segmentation. Therefore, **our method is more versatile compared to monocular or multiview depth estimation methods.** Additionally, our method is a camera-only solution, whereas learning-based depth estimation methods require LiDAR-supervised training. Hence, **our method requires a more cost-effective sensor setup compared to depth estimation methods.** We have also added a SOTA multiview stereo baseline, DUSt3R [1]. We agree that including depth metrics makes the evaluations more comprehensive. The mean depth RMSE of our method is 7.376 m, while DepthAnything's is 13.562 m and DUSt3R's is 13.37 m (background regions). Some qualitative examples of DUSt3R are shown in Figure 10 in the PDF under the global rebuttal. These results demonstrate that our method remains the best without the need for LiDAR sensors. We will add the multiview stereo results and depth metrics in the final version of our paper. **Note that our method and depth estimation can be complementary: our method can provide pseudo ground truth for depth estimation, while depth estimation can aid in the initialization stage of our method. Future work includes leveraging multitraversal data to enhance depth estimation methods and utilizing depth estimation for efficient 3DGS initialization.** [1] DUSt3R: Geometric 3D Vision Made Easy. CVPR 2024 --- *Q6. 
Table 3* Table 3 does not evaluate pixels corresponding to transient objects, as we do not have ground truth background pixels in regions occluded by transient objects. Therefore, the observed gap is not too large. However, our method performs significantly better in occluded regions, as shown in Figures 6 and XIV.
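The matching baseline described under Q4 above (flagging patches with few matching counterparts in other traversals as transient) can be sketched as follows; the cosine-similarity threshold and feature shapes are assumptions, not the authors' exact protocol:

```python
import numpy as np

def transient_mask(feats_query, feats_refs, sim_thresh=0.8):
    """Flag query patches as transient when their best cosine similarity to
    any patch in the reference traversals falls below sim_thresh.
    feats_query: (N, D) patch features of the query traversal;
    feats_refs: list of (M_i, D) patch features from other traversals."""
    q = feats_query / np.linalg.norm(feats_query, axis=1, keepdims=True)
    best = np.full(len(q), -1.0)
    for ref in feats_refs:
        r = ref / np.linalg.norm(ref, axis=1, keepdims=True)
        best = np.maximum(best, (q @ r.T).max(axis=1))  # best match per query patch
    return best < sim_thresh
```

This captures the baseline's logic: patches well matched across traversals are treated as permanent, unmatched ones as transient.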
Summary: The paper presents a novel approach for self-supervised scene decomposition using multi-traverse camera data, which results in a high-quality static background scene reconstruction via Gaussian Splatting. The method 3D Gaussian Mapping leverages repeated traversals and feature distillation to capture the emergent focus on 2D consensus structures and, therefore, dynamic foreground masks, which contribute to mitigating the disturbance of temporal dynamic objects on visual-based 3D mapping. Along with the proposed benchmark, this method paves an interesting direction, leveraging traversals to learn the inherent structure of background in autonomous driving, which finds wide applications in 3D map construction, map change detection, driving simulation, etc. Strengths: 1. **Innovative Approach**: The combination of multitraversal and Gaussian Splatting is a fresh and compelling method. It effectively decomposes an urban scene into static and dynamic elements merely via visual features and image rendering, which is an innovative idea for the urban scene decomposition task. 2. **Method of Simplicity and Effectiveness**: The paper combines and adapts the latest feature distillation, denoised DINOv2, and 3D reconstruction methods, Gaussian Splatting, to accomplish self-supervised dynamic component segmentation through emergent outweighing effect by multitraversal data. Visual feature residuals resulting from rendering results are effectively utilized to extract contours of the ephemeral elements simply by spatial gradient with the following postprocessing. 3. **Comprehensive Evaluation and Ablation Study**: The evaluation section is thorough, demonstrating the method's effectiveness across three tasks, i.e., 2D segmentation, 3D reconstruction, and 3D mapping. 
The abundant qualitative and quantitative results show robust performance under diverse conditions and improvements over existing techniques, particularly in ephemerality decomposition and 3D scene geometry learning. An encompassing ablation study shows the influence of traversals and selection of hyperparameters on segmentation tasks. Weaknesses: 1. **Assumptions on Environmental Stability**: The method strictly assumes a stable environment without major geometry change under consistent illumination and weather. This assumption might not hold in wider real-world scenarios, potentially affecting the method's robustness and generalizability. 2. **Lack of Failure Case Study**: Even though the paper discusses some failure cases regarding shadow, occlusion, reflection, etc. It's still interesting to see the influence of weather and large illumination changes on this approach and if increasing the number of traversals can tackle the issue. 3. **Lack of Comparative Baselines Regarding Env Reconstruction and Rendering**: Although the paper provides a solid evaluation of all tasks, it could benefit from more comparative analysis with state-of-the-art urban scene reconstruction methods, especially regarding static environment representation. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. **Additional Comparison to Urban Scene Reconstruction Methods**: Some comparisons to the latest urban scene reconstruction methods, like EmerNeRF, HUGS, NeuRad, and so on, regarding the static env reconstruction and rendering quality, especially to the results of actor removal. This can serve as a broader discussion on the effect of multitraversals and sensor types. 2. **Failure Case Study**: Despite the strict assumption of this approach, it's still compelling to discuss the effect of environment and illumination change on this approach. 3. **More Details on Training and Rendering Procedure** This paper offers a detailed workflow as an overview of the whole approach. 
However, directly embedding DINO features in 3DGS can have side effects on training and rendering, especially on such a large scene. More details on hyperparameter selection and the training process would be helpful for further work in this field. Overall, this paper presents a promising and innovative advancement in scene decomposition from multitraverse data, with several strengths that make it a valuable contribution to the field of computer vision. Addressing the noted weaknesses could further enhance its impact and generality. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable feedback and insightful suggestions. Below is our detailed response. --- *Q1: Assumptions on Environmental Stability: The method strictly assumes a stable environment without major geometry change under consistent illumination and weather.* We agree with you that relaxing the assumption of environmental stability can further enhance the impact and generality of our method. We have made two efforts to address your concern: * We have tested our method under more **challenging illumination and weather conditions, such as night, foggy, and rainy** scenarios. Specifically, we added images from one additional traversal under adverse conditions to the multitraversal collection. Surprisingly, our method still performed well in unsupervised segmentation, even under night, foggy, and rainy conditions, as shown in Figure 1-3 in the PDF under the global rebuttal. **This demonstrates that our method is robust against inconsistent illumination and weather conditions, including night, fog, and rain.** * We have incorporated a learnable traversal embedding into the 3DGS. This effectively models the appearance changes across traversals. Due to the limited time for the rebuttal, we present preliminary results in Figure 4 in the PDF under the global rebuttal. From the top left to the bottom right, it shows the rendering results by interpolation between two traversal embeddings. We will include these experiments in the final version of our paper. Additionally, we would like to emphasize that **3DGS for self-driving with multiple traversals is an unexplored research topic, especially considering significant environmental changes (adverse weather). 
There are many open questions in this domain, and we believe our dataset and method can serve as a benchmark and baseline to inspire further research on this topic.** --- *Q2: Lack of Failure Case Study: Even though the paper discusses some failure cases regarding shadow, occlusion, reflection, etc. It's still interesting to see the influence of weather and large illumination changes on this approach and if increasing the number of traversals can tackle the issue.* We agree with you that discussing the influence of weather and large illumination changes would make our experiments more comprehensive. We have conducted additional experiments with traversals collected under challenging weather and illumination conditions. As discussed above, we find that **our method demonstrates good robustness against diverse conditions**. Note that these experiments use 11 traversals, and decreasing the number of traversals will degrade segmentation performance, similar to the findings in Ablation Study C.4 in our paper (Ablation Study on Number of Traversals: Visualization and Discussion). In addition, we have also added one traversal with snow on the road. We observed that snow on the road can also be segmented, as it appears only in a single traversal, as shown in Figure 5 in the PDF under the global rebuttal. We do not consider this a "failure case" since snow is transient. We will include these results in the final version of our paper. --- *Q3: Lack of Comparative Baselines Regarding Env Reconstruction and Rendering: Although the paper provides a solid evaluation of all tasks, it could benefit from more comparative analysis with state-of-the-art urban scene reconstruction methods.* We agree that adding additional comparisons to urban scene reconstruction methods would enhance our experiments. There are several fundamental differences between our method and existing ones such as EmerNeRF, HUGS, and NeuRad. 
We will include these comparisons in the final version of our paper to provide a broader discussion on the effect of multitraversals and sensor types. * **Input and Output**: These works take a single-traversal video as input, which limits their ability to reconstruct static environments occluded by static yet transient objects, such as parked cars. *In contrast, our method, based on multi-traversal input, can reconstruct a static environment without any movable objects (including both dynamic and static vehicles).* * **Sensor Requirements**: These methods require LiDAR point cloud data as input. In contrast, our method only uses RGB images, using a more *cost-effective and portable* sensor setup. * **Supervision**: HUGS and NeuRad require 3D bounding boxes as input, making them not fully self-supervised methods. *Our method, however, does not rely on such external supervision.* We have added quantitative and qualitative results of EmerNeRF, a self-supervised method, for a fair comparison. We ran the original codebase of EmerNeRF with both LiDAR and camera inputs on each traversal in our dataset. * **Actor Removal**: EmerNeRF cannot remove static cars because it models static cars as backgrounds and only segments objects with motion. Besides, the decomposition performance is not ideal, likely due to the monocular camera input. The segmentation IoU is **7.3%**, which is significantly lower than our method. * **Adverse Weather**: The performance under adverse weather conditions is unsatisfactory, as shown in the rainy day example in Figure 4 in the PDF under the “global” response. * **Rendering Quality**: The rendering quality is also inferior to our method, as shown in Figure 6 and Figure 7 in the PDF under the global rebuttal. We will include more quantitative results in the final version of our paper. --- *Q4: More Details on Training and Rendering Procedure.* We agree that providing more details can be valuable for further work in this field. 
Most of our method follows the hyperparameters of 3DGS. Empirically, we found that the feature rendering loss function plays a crucial role in scene decomposition: the KL divergence loss function performs much better than the L1 loss. We will report more ablation studies in the final version of our paper and release our code to support further research. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Hi, I appreciate the detailed responses to my concerns. Thanks for agreeing with me on many aspects. I am happy to stick with my initial rating. --- Reply to Comment 1.1.1: Title: Thank you for recognizing our rebuttal and contributions to the field. Comment: Dear Reviewer 3Lqq, Thank you for your follow-up comment and for acknowledging our detailed responses to your concerns. We are glad that our additional experiments and explanations addressed your concerns. We also appreciate your recognition of our contributions to the field. We respect your decision to maintain your initial rating and are grateful for your thoughtful review and feedback, which has helped us improve our paper. Thank you again for your time and effort in reviewing our work! Best regards, NeurIPS 2024 Conference Submission3651 Authors
Summary: The paper proposes a method called 3DGM that performs foreground-background disentangled 3D reconstruction by capturing the consistent parts from multi-traverse videos. 3DGM leverages 3DGS as the scene reconstruction algorithm, using only camera images as input, and achieves decoupled reconstruction of the 3D environment and 2D object segmentation. Additionally, the paper introduces a new dataset that combines Ithaca365 and nuPlan, which is used to evaluate unsupervised 2D segmentation, 3D reconstruction, and neural rendering. To be specific, the author has observed that self-driving cars often traverse the same routes repeatedly, encountering new pedestrians and vehicles each time, similar to how humans encounter different people every day in the same 3D environment. Inspired by the fact that humans are better at remembering permanent structures and forgetting transient objects, the author proposes the idea of developing a mapping system that can identify and memorize the consistent structures of the 3D world through multiple traversals without the need for human supervision. The key insight is that while the specific pedestrians and vehicles change from one traversal to the next, the underlying 3D environment (buildings, roads, etc.) remains the same. By leveraging this observation, the proposed system could learn to filter out the transient objects and focus on mapping the permanent structures, akin to how humans naturally encode spatial knowledge. The goal is to create a self-supervised system that can build a robust 3D map of the environment simply by repeatedly navigating through it without requiring any manual labeling or annotations. Strengths: 3DGM is an unsupervised approach that doesn't require any additional manual annotations. The reconstruction process relies solely on camera input, without the need for depth measurements from sensors like LiDAR. Employing 3DGS enables much faster reconstruction speeds for implicit neural rendering. 
The authors also contribute a new multi-traverse video dataset. Weaknesses: The end-to-end process for 3DGM appears to be quite complex, with multiple stages involved. However, the paper fails to provide a lucid explanation of the nitty-gritty details involved in implementing the algorithms. Without the availability of the accompanying source code, it would be an uphill task for others to replicate the results or build upon this work effectively. Technical Quality: 4 Clarity: 3 Questions for Authors: Please check the Weaknesses and Limitations Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Firstly, although this is a self-supervised approach, in practical applications, multiple passes along the same road are still required for reconstruction. Could some prior information be incorporated to enable the model to infer the background and complete the task with fewer traversals? Secondly, the methods based on 3D reconstruction have high data quality requirements, and the 3DGS-based methods need a reasonably accurate initial pose estimation (by COLMAP), which further exacerbates the demand for high-quality data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your insightful feedback and suggestions. Please review our detailed response below. --- *Q1. The end-to-end process for 3DGM appears to be quite complex, with multiple stages involved. However, the paper fails to provide a lucid explanation of the nitty-gritty details involved in implementing the algorithms. Without the availability of the accompanying source code, it would be an uphill task for others to replicate the results or build upon this work effectively.* We would like to emphasize that, although our method has multiple stages, each module is **easy to deploy** and **computationally efficient**. As discussed in the paper, using a feature map with a resolution of 110×180 requires only 2,000 iterations to achieve an IoU score exceeding 40%, taking approximately 8 minutes on a single NVIDIA RTX 3090 GPU for 1,000 images from 10 traversals of a location. Future work will include investigating more efficient initialization methods. A detailed workflow of the overall method is provided in Appendix A. To support reproducibility and further research, we will release our source code and dataset upon acceptance of our work. --- *Q2. Firstly, although this is a self-supervised approach, in practical applications, multiple passes along the same road are still required for reconstruction. Could some prior information be incorporated to enable the model to infer the background and complete the task with fewer traversals?* We agree that incorporating prior information can further enhance the efficiency of the proposed method. For example, if past traversals with both LiDAR and camera data are available, the 3D reconstruction could benefit significantly from such prior information. 
**In fact, our method could serve as a strong camera-only baseline for constructing such a scene prior, facilitating scene decomposition and reconstruction in future traversals of the same location.** *We leave the exploration of prior information for future work, as it requires a complete reformulation of the problem, which is non-trivial and beyond the scope of this work.* **Future research opportunities include developing algorithms that seamlessly integrate prior data, exploring the impact of different types of prior information on reconstruction quality, and creating frameworks that can dynamically adapt to changes in new traversals.** We will include more discussions on this in the final version of our paper. In addition, we would like to emphasize that the setup of multiple passes of the same location is quite feasible, as vehicles typically operate within the same spatial region. As shown in a recent publication at CVPR 2024 [1], it is possible to obtain hundreds of traversal data for the same location within several days. --- *Q3. Secondly, the methods based on 3D reconstruction have high data quality requirements, and the 3DGS-based methods need a reasonably accurate initial pose estimation (by COLMAP), which further exacerbates the demand for high-quality data.* Our method only requires **monocular** RGB images with a resolution of around **900x600**, which can be **easily obtained and is the most cost-effective data collection method**. *No other sensor data, such as LiDAR point clouds, GPS, or IMU data, are needed.* Additionally, **data collection by multiple traversals of the same location is highly feasible, as demonstrated by existing datasets collected by either academic labs or industry companies, such as Ithaca365, nuPlan, and MARS [1].** In addition, we have found that multitraversal RGB images can facilitate COLMAP initialization, producing very accurate camera poses. 
Based on our empirical studies, **COLMAP initialization requires only 2 or 3 traversals with only monocular RGB images** to significantly improve the success rate compared to single-traversal scenarios. [1] Li, Y., Li, Z., Chen, N., Gong, M., Lyu, Z., Wang, Z., Jiang, P. and Feng, C., 2024. Multiagent Multitraversal Multimodal Self-Driving: Open MARS Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 22041-22051).
null
null
Rebuttal 1: Rebuttal: ## Global Rebuttal We sincerely thank all the reviewers for their insightful comments. We appreciate the positive feedback: **fresh and compelling, innovative approach, evaluation section is thorough, promising and innovative advancement in scene decomposition, valuable contribution to the field of computer vision, well-written and easy to follow, and good potential for scalability.** Major concerns raised by our reviewers are (1) data requirement, (2) robustness against lighting and illumination changes, (3) comparison to HUGS and NeuRad, (4) comparison to EmerNeRF, (5) comparison to unsupervised object discovery and segmentation, and (6) comparison to depth methods. We summarize our responses below to fully address these concerns. --- ### 1. Data Requirement Our method only requires **unposed monocular RGB images** with a resolution of **900x600**, which can be easily obtained and is the **most cost-effective** data collection method. *No other data, such as LiDAR point clouds, GPS, and IMU data, are needed*. Additionally, data collection by multiple traversals of the same location is highly feasible, as demonstrated by existing datasets collected by either academia or industry, such as Ithaca365 [CVPR'22] by Cornell University, nuPlan [ICRA'24] by Motional, and MARS [CVPR'24] by May Mobility. --- ### 2. Robustness Against Lighting and Illumination Changes We have tested our method under challenging conditions such as **night, fog, snow, and rain**. **The results demonstrated our method's robustness against diverse illumination and weather conditions** (see Figure 1-3). We also incorporated a learnable traversal embedding to model appearance changes across traversals. Preliminary results show that this approach effectively handles appearance changes (see Figure 4), and we will include these results in the final paper. ---
### 3. Comparison to HUGS and NeuRad Our method differs from HUGS and NeuRad in several key aspects: - **Input and Output:** HUGS and NeuRad require single-traversal videos, which limits their ability to reconstruct static environments **occluded by static yet transient objects**. Our multitraversal input approach overcomes this limitation. - **Sensor Requirements:** HUGS and NeuRad **need LiDAR point cloud data**, whereas our method only requires RGB images. - **Supervision:** HUGS and NeuRad **require 3D bounding boxes**, making them not fully self-supervised. Our method does not rely on external supervision. We believe the above fundamental differences can already distinguish our method from these supervised and LiDAR-based methods. We will add more discussions. --- ### 4. Comparison to EmerNeRF EmerNeRF and our method both address self-supervised scene decomposition, but they differ in three aspects: - **Key Idea:** EmerNeRF uses **scene flow** as a self-supervised signal to separate dynamic objects from static backgrounds, while we leverage **multitraversal consensus** as a self-supervised signal for transient-permanent decomposition. - **Sensor:** EmerNeRF requires **LiDARs**, while we **only use RGB images**, providing a more cost-effective and portable solution. - **Functionality:** EmerNeRF can only decompose **moving objects**, whereas our method can handle **both dynamic and static yet movable objects**, e.g., parked vehicles. We demonstrate that our method outperforms EmerNeRF, which requires both camera and LiDAR input. Our approach achieves a segmentation IoU of over 40%, compared to EmerNeRF's 7.3%. While EmerNeRF struggles with static cars and produces noisy decompositions (see Figure 6-7), our method maintains high performance even in adverse weather conditions. ---
### 5. Comparison to Unsupervised Object Discovery and Segmentation We compared our method to FOUND [CVPR'23], a state-of-the-art unsupervised object discovery method, which showed inferior performance with an IoU of 12.77% (see Figure 8). We also implemented a baseline using DINO-based feature matching across traversals. However, this approach resulted in poor and noisy segmentations, with an IoU of 10.17% (see Figure 9). **Note that our method not only discovers/segments objects in 2D with high quality (with an IoU of >40%) but also reconstructs the 3D environment, offering a significant advantage.** **Our key novelty is to lift 2D to 3D so that we can achieve more accurate 2D segmentation by learning representations in 3D.** --- ### 6. Comparison to Monocular and Multiview Depth Estimation Methods We would like to emphasize that our method is a 3DGS-based approach that can be used not only for depth estimation but also for various tasks such as rendering and segmentation. Therefore, **our method is more versatile compared to monocular or multiview depth estimation methods**. Additionally, our method is a camera-only solution, whereas learning-based depth estimation methods require LiDAR-supervised training. Hence, **our method requires a more cost-effective sensor setup compared to depth estimation methods.** *Note that our method and depth estimation can be complementary to each other: our method can provide pseudo ground truth for depth estimation, while depth estimation can facilitate the initialization stage of our method.* In addition, we have compared our method to DUSt3R [CVPR'24], a state-of-the-art multiview stereo method (see Figure 10). Our method achieved a mean depth RMSE of 7.376m, significantly better than DepthAnything (13.562m) and DUSt3R (13.37m). These results will be included in the final version of our paper.
--- ### Conclusion In summary, we are the first to achieve **simultaneous 2D segmentation and 3D mapping with pure self-supervision using camera-only input**. *Previous works typically addressed these problems separately and required external supervision or LiDAR sensors.* We believe our method and dataset will inspire future research and significantly contribute to the field. We will release all resources to the community upon acceptance. Pdf: /pdf/c0d748380677fbb8ce287da79aeba230e74d992e.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Unveiling Transformer Perception by Exploring Input Manifolds
Reject
Summary: The authors propose a mathematically rigorous methodology based on Riemannian geometry for attributing network importance of tokens in a transformer model's input space (e.g. image patches, or ~words in the textual domain). The proposed methodology—whilst based on sound theory—translates into an intuitive algorithm involving what appears to be a relatively inexpensive eigendecomposition. Experiments on 3 datasets across both the image and NLP domains explore how the features correlate with ground-truth inputs in the text domain, in addition to first steps towards exploring how the features affect the networks’ output logits. Strengths: - A major strength of the paper is the mathematically solid approach in attempting to identify regions of the **input** space that explain transformer models’ decisions. This is an important area of study: in contrast to many recent mechanistic interpretability methods finding latent network representations (that are intrinsically hard for humans to interpret by default), salient features in the pixel/text input space are much more readily interpreted by humans. - Whilst I am unfamiliar with geometric deep learning, the authors do a fantastic job of presenting the technical content in a digestible manner without sacrificing depth or rigor. Weaknesses: # [W1] Feature importance comparisons Feature importance-based explanations are motivated on [L303] as quantifying the contribution of features `"to a model prediction"`. More concretely, around [L180], the authors motivate the eigenvalues of the pullback metric found using their method as ultimately deducing the importance of each segment (e.g. image patch) `“with respect to the the final prediction”`. Consequently, a major weakness of the paper is that there is no comparison with related work for how well the proposed method’s identified important features alter the **output** logits (e.g. upon ablation).
I am slightly confused by why the authors did not adopt the established “perturbation test” experimental protocol in the baseline [1] against which they compared, to provide experimental evidence in favor of this. Currently, the only comparisons made around [L274] measure the features’ importance as they correlate to the *input’s* labels. Concretely, the authors could, for example, ablate particular patches of MNIST and observe that the resulting performance drops correlate with the pullback metric’s eigenvalues. This would provide stronger evidence of the authors’ claims about the features affecting the networks’ output, and (crucially) ground the results in contrast to those achievable by existing methods. # [W2] Limited experimental results & improvements There is a lot of interesting theory here, but ultimately this is a paper with a concrete applied goal of feature attribution in transformer models. With such a new methodology with many technical details, I believe there is an extra burden of proof on the authors to demonstrate this somehow leads to additional insights / practical gains. As such, it is a relative weakness of the paper that so few experiments are performed to justify the methodology. Beyond toy datasets, it would be interesting to see how the method performs on more complex ones (not necessarily larger ones), such as TinyImageNET. Here, we could visualize much more easily if the method helps identify salient features of animals’ body parts (for example) as being important features for classification. MNIST experiments alone in the image domain are hard to interpret given the similarity of all the input data. Furthermore, the method provides an almost insignificant increase of just `0.07` cosine similarity (over the baseline in [1]), on just a single dataset (and with just two baselines—for example, how does GradCAM perform here?). This is not sufficient evidence to convince me as a reader that the proposed methodology should be adopted. 
--- - [1]: Chefer, Hila et al. “Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers.” 2021 IEEE/CVF International Conference on Computer Vision (ICCV) (2021): 387-396. Technical Quality: 3 Clarity: 4 Questions for Authors: No additional questions at this time (beyond those alluded to in the weaknesses section). Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Some limitations are indeed addressed throughout. However, (unless I have missed something, in which case I apologise!) I can only find the limitations of the small number of datasets used stated in the NeurIPS checklist. This needs to be stated explicitly in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Feature importance comparisons** We did not perform an in-depth analysis of the feature importance extraction because it was not the main focus of our work. The primary goals of this work are twofold: first, to theoretically model the exploration of a Transformer's input space, and second, to implement these theoretical results using SiMEC, SiMExp, and the input exploration techniques found in Section 3. These objectives are stated in lines 7-8, 12-13, 20-23, 319-323. However, the reviewer's proposal concerning the effect of ablation on the feature importance analysis is indeed interesting. We expect that ablations -- for example, covering one patch of a picture with a black square in an image classification task -- do have an impact on the identification of important features. In this classification example, the mathematical model depicted in the first half of the work guarantees that modifying patches with high eigenvalues leads to a greater change in output compared to patches associated with low eigenvalues. On the other hand, the pullback metric formula (5) suggests that if the network is very sensitive to a change in a particular patch, then the metric can also change more. These two facts suggest that the ablation or perturbation of important features may affect their eigenvalues more compared to features which are less important. Apart from these qualitative considerations, understanding exactly how ablation affects the importance of certain features requires both a theoretical and an empirical analysis. At this stage these are out of scope, but could be a building block for future work. **[W2] Limited experimental results & improvements** We acknowledge the simplicity of our experimental methodology.
In light of the aforementioned goals, our intention was not to introduce a new technique for feature importance, but rather to demonstrate the potential empirical applications of our approach and to discuss how our proposed tools can facilitate Explainable AI tasks. We selected the MNIST dataset for our experiments due to its simplicity and popularity, which we believe facilitates easier interpretation by the reader. The well-known differences between handwritten digits in MNIST make it straightforward to understand how changes in input data can alter outputs (e.g., adding a straight line to the digit '5' to make it resemble a '9'). We agree that TinyImageNET could have been an interesting dataset for our experiments; however, we prioritized clarity in our results over potentially more intriguing examples. As already stated, our main goal is not to convince the reader to adopt our method for assessing feature importance, but to show that our theory works correctly on the Transformer's input space and for that reason produces an outcome that is competitive with state-of-the-art techniques in the field of feature importance-based explanations. In particular, the two baselines were chosen with the goal of comparing our method with a well-established method specifically designed for Transformers (i.e. Attention Rollout) and a method that was recently proven to outperform many others, including Grad-CAM (see reference [6]). --- Rebuttal Comment 1.1: Title: Thanks to the authors Comment: Thanks to the authors for their response. I appreciate the authors’ efforts to clarify the goals and claims of the paper in their first response. Unfortunately, I am still not convinced as it stands that the limited experimental results on MNIST sufficiently demonstrate the authors’ claim made in the rebuttal to `"demonstrate the potential empirical applications of our approach"`.
I do not agree with the authors that adding additional results would have traded-off clarity (these can always be deferred to the supplementary material, for example). Furthermore, two additional reviewers have the same flavor of concerns—it would have been easy for the authors to conduct small experiments on any of the additional suggested datasets (such as CIFAR by **Reviewer-TeNK** or TinyImageNET as suggested in the initial review). In the absence of additional experiments, my rating currently remains the same.
Summary: This work attempts to find the set of inputs that generate the same neural network predictions. To this end, the authors interpret the layers of the network as transformations of the input manifold. This interpretation is used to define equivalence classes over the inputs and to define feature importance. Finally, the tools are used to identify equivalence classes for MNIST digits and for hate speech detection with BERT. Strengths: Section 2 does a thorough job of introducing a manifold interpretation of neural networks. This introduction is then used to motivate multiple algorithms for finding equivalence classes of the inputs---or the set of inputs that result in the same prediction---and to identify features that are important. Weaknesses: The main contribution of this work is to introduce a tool for analyzing which set of inputs produce the same output. However, this is exactly the Fisher information matrix (with respect to the inputs) and has also been introduced in prior work (https://arxiv.org/abs/2104.13289). Could the authors clarify what the differences are and what the additional novelty is? If the "local data matrix" introduced in https://arxiv.org/abs/2104.13289 is identical to the tools in this work, I think it severely diminishes the contributions of this work. Furthermore, the experiments are extremely similar (such as Figure 1). The second weakness is the limited number of experiments. The work does not show any quantitative results: Figure 1, Figure 2 and Figure 3 each show just one example and are not indicative of why the tools are useful. The experiments in section 4 primarily discuss wall-clock time. It would significantly help if claims such as "(Line 266) we notice that the perturbation-based algorithm ends up producing monochrome ..." are substantiated quantitatively. Overall, the work doesn't provide novel tools, and the experiments lack a novel usage of these tools and do not reveal any new insights.
Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How does this work differ from https://arxiv.org/abs/2104.13289? 2. What is algorithm 4 (exploration) used for? 3. Many experimental details are lacking. It would help to add appendices clarifying what networks were used and how they were trained. Overall, the work feels incomplete in many respects. 4. For feature importance-based explanations, there are many other methods like LIME/SHAP (and many more follow-up papers) which the authors do not compare to. Why were they omitted? 5. Does this method scale to larger datasets, considering we have to compute the outer product of the gradients and compute the eigenvalues of this matrix? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: The authors address limitations of their work but it can be expanded upon. For example, the authors can discuss the time required to compute eigenvalues, and other limitations such as the case where no eigenvalues are 0 in Algorithms 3 / 4. Furthermore, their algorithm should work (in theory) for infinitesimal steps in the input manifold. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We will address the questions in the same order as presented by the reviewer. **1. How does this work differ from https://arxiv.org/abs/2104.13289?** The two works present several differences, which we list below. - The theoretical framework in our work can be applied in very general settings, with no particular requirements on the loss function, while in the suggested work only the Kullback-Leibler divergence is considered. - We allow a large class of activation functions, while in https://arxiv.org/abs/2104.13289 only ReLU and softmax are considered. - We do not require that the matrix representing the pullback has constant rank: this is actually a consequence of the mild assumptions on our layers. - Our approach allows the study of the behaviour of \emph{fully trained} networks; of course, one can also apply our method to partially trained ones. - Contrary to https://arxiv.org/abs/2104.13289, we discuss the magnitudes of the eigenvalues, in order to employ them for feature importance. - The linked document exploits only the powerful, although simple, second-order Taylor expansion of the loss, which must therefore belong at least to $\mathcal{C}^2(\Omega)$, where $\Omega$ is the loss's domain. In our work, the theoretical foundation is based on proven theorems and general results; hence, our approach does not depend on the loss function nor on its differentiability. **2. What is algorithm 4 (exploration) used for?** Algorithm 4 outlines a specialized procedure for exploring the Transformer's input space, allowing for the selection of specific patches for exploration (see lines 189-195). This algorithm generates embeddings based on the exploration conducted over a set number of iterations. To make the changes between iterations perceptible to the human eye, a subsequent interpretation step is required. This is where Algorithm 5 comes in; it takes the embeddings produced by Algorithm 4 as its input.
Consequently, when discussing the interpretation of the exploration process, we refer to Algorithm 5. As noted on line 3 of Algorithm 5, the algorithm explicitly requires the outputs from Algorithm 4 to function. **3. Many experimental details are lacking. It would help to add appendices clarifying what networks were used and how they were trained. Overall, the work feels incomplete in many respects.** The experiments utilize two Transformer networks: (1) a BERT model for hate speech detection, used without further training, as sourced from Hugging Face (huggingface.co/ctoraman/hate-speech-bert), as mentioned in Note 4; and (2) a Vision Transformer (ViT) model, consisting of 4 layers with 4 heads per layer, trained on the MNIST dataset using the Adam optimizer over 20 epochs, as described in Note 5. All the code used for modeling and training the networks is provided in the Supplementary Materials. **4. For feature importance-based explanations, there are many other methods like LIME/SHAP (and many more follow-up papers) which the authors do not compare to. Why were they omitted?** Since feature importance-based explanations are not the primary focus of our work, we present them only as an application, or a collateral outcome, of our method, whose central goal is the exploration of the input space of Transformers. As a consequence, we simply show that our SiMEC-based feature importance explanations are competitive with state-of-the-art methods commonly used in the context of Transformers. Techniques such as LIME and SHAP were judged relevant in the field of feature importance evaluation, but other techniques (i.e. Attention Rollout and the Relevancy method) were deemed more appropriate and up-to-date in the realm of Transformers.
**5. Does this method scale to larger datasets, considering we have to compute the outer product of the gradients and compute the eigenvalues of this matrix?** The key factors in estimating computational complexity and scalability are primarily related to the network architecture. As mentioned in lines 160-164, the most computationally intensive task is calculating the eigenvalues and eigenvectors, which has a complexity of $O(n^3)$, where $n$ is the dimension of the square matrix $g^0_{p_k}$ (see reference 20), i.e., the embedding dimension. Since all other operations have complexities of either $O(n)$ or $O(n^2)$, the overall complexity for both SiMEC and SiMExp is $O(n^3)$. Since the complexity depends on the embedding dimension and not on the number of instances, the entire procedure scales linearly with respect to the number of instances processed in the experiments. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: Thank you for taking the time to respond to all the questions. However, many of my concerns remain unresolved. I agree with the authors that there are some differences with prior work that uses the local Fisher/data matrix. However, the differences about the activation function or loss function seem of little relevance in the context of deep networks. Unless I am mistaken, the algorithm used in (https://arxiv.org/pdf/2104.13289) is identical to algorithm 1. As mentioned in the weakness section, the experiments are limited to small datasets and 3 of the 4 results are qualitative. The purpose of algorithm 4/5 is still not clear to me; I understand the steps of the algorithm, but I don't understand what goal it achieves and there are no results that show it can be used to derive new insights. The authors have introduced new tools / algorithms but I believe that there needs to be more thorough experimental validation. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their effort in replying to our rebuttal response.
However, we still do not agree with the reviewer’s point of view about the novelty of both our mathematical framework and algorithms. We are confident in the distinctiveness of our work compared to https://arxiv.org/pdf/2104.13289. In addressing the reviewers' concerns about its novelty, we thoroughly examined both the mathematical foundations and the application to the Transformer architecture. Our responses highlighted six key differences and clarified our methodological contributions, emphasizing the relevance of all presented algorithms. While there may be surface-level similarities, our algorithms are underpinned by unique mathematical proofs that ensure broader generalizability (without restrictions on loss functions and covering a wider range of activation functions), applicability to fully trained networks (not only partially trained ones), and the flexibility to operate without assuming a constant rank for the pullback metric. Based on our comparison, we believe there are substantial differences between the two works, and the divergence points that we highlighted provide strong evidence against the reviewer’s argument.
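To ground the pullback-metric machinery debated in this thread, here is a minimal numerical sketch. Everything in it is an illustrative assumption, not the authors' code: a toy map `f` stands in for the network, the metric is pulled back as g = J^T J from a finite-difference Jacobian, and the O(n^3) eigendecomposition mentioned in the rebuttal separates near-null directions (moves within an equivalence class) from high-eigenvalue directions (important features).

```python
import numpy as np

def f(x):
    # Hypothetical toy "network" mapping R^3 -> R^2, standing in for a Transformer layer map.
    W = np.array([[1.0, 0.5, -0.3],
                  [0.2, -1.0, 0.8]])
    return np.tanh(W @ x)

def jacobian(func, x, eps=1e-6):
    # Forward finite-difference Jacobian: J[i, j] = d f_i / d x_j.
    y0 = func(x)
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (func(x + dx) - y0) / eps
    return J

x = np.array([0.3, -0.1, 0.7])
J = jacobian(f, x)
g = J.T @ J                        # pullback of the Euclidean output metric: an n x n PSD matrix
evals, evecs = np.linalg.eigh(g)   # the O(n^3) eigendecomposition step (ascending eigenvalues)

# Eigenvectors with (near-)zero eigenvalues span directions along which the output
# is locally unchanged, i.e. the tangent space of the input equivalence class.
null_dirs = evecs[:, evals < 1e-8]
```

Stepping the input along `null_dirs` and recomputing the metric at each step is, roughly, an equivalence-class exploration in the SiMEC style; stepping along the top eigenvector instead changes the output the most, which is the feature-importance reading of the eigenvalues.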
Summary: The authors present a method for exploring equivalence classes in the input space of Transformer models using a solid mathematical theory. By analyzing the Jacobian of the model, the method reconstructs and navigates these classes, offering a powerful tool for understanding Transformer interpretations and enhancing explainability. The proposed method is expected to solve problems in Computer Vision and Natural Language Processing tasks. Strengths: I write both strengths and weaknesses here. First, I must disclose that I have no prior study or background in either Transformers or manifolds. While I am conceptually aware of them, my knowledge is limited to that extent, and I lack confidence in reviewing the technical details. Therefore, please consider my review comments as feedback from a layperson in this field, focusing on the overall mathematical consistency and readability of the paper. This paper describes mathematics in a clear and understandable manner that even a layperson like myself can grasp. Each definition and theorem is stated accurately, and I believe that the general concepts can be understood with basic knowledge. I personally feel that the objective of this paper is not clearly conveyed. While the paper claims to contribute to explainability and sensitivity analysis through the analysis of input manifolds, the logic behind this was not clear to me in the Introduction and Preliminaries. Although the concepts of explainability and sensitivity analysis become clearer in the later chapters, it might be beneficial to provide a bit more explanation in the Introduction. Additionally, it might be helpful to clearly define the equivalence class mathematically. Since I am not familiar with the existing literature, I was unable to judge the novelty of this work. Weaknesses: See above. Technical Quality: 3 Clarity: 2 Questions for Authors: See above. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The primary goals of this work are twofold: first, to theoretically model the exploration of a Transformer's input space, and second, to implement these theoretical results using SiMEC, SiMExp, and the input exploration techniques found in Section 3. These objectives are stated in lines 7-8, 12-13, 20-23, 319-323. In light of these main goals, we also discuss how our proposed tools can facilitate Explainable AI tasks. Our experiments are not intended to introduce a new technique for feature importance, but rather to demonstrate the potential empirical applications of our approach.
Summary: This paper develops a novel theoretical framework grounded in Riemannian geometry for analyzing the input space of Transformer models, and introduces two algorithms, SiMEC and SiMExp, which facilitate the exploration and interpretation of equivalence classes within this input space. These methods offer new insights into the internal mechanisms of Transformers, and provide new understanding of how these models perceive and process input data, which can be very useful in the field of explainable AI. Strengths: 1 Novelty: This paper provides an innovative application of Riemannian geometry to analyze the input spaces of Transformer models, which is very novel in the area. 2 Theory: This paper establishes a solid mathematical theory on how Riemannian geometry is applied to Transformer models. Based on this theory, SiMEC and SiMExp are developed to explore the input spaces of Transformer models. Weaknesses: In the experiments, the MNIST dataset is a little trivial, as the pixels of the background are essentially zero. It would be nice to see the application of the proposed algorithm on natural images like CIFAR. Technical Quality: 3 Clarity: 2 Questions for Authors: See above Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We acknowledge the simplicity of our experimental methodology. However, the primary goals of this work are twofold: first, to theoretically model the exploration of a Transformer's input space, and second, to implement these theoretical results using SiMEC, SiMExp, and the input exploration techniques found in Section 3. These objectives are stated in lines 7-8, 12-13, 20-23, 319-323. In light of these main goals, we also discuss how our proposed tools can facilitate Explainable AI tasks. Our experiments are not intended to introduce a new technique for feature importance, but rather to demonstrate the potential empirical applications of our approach. We selected the MNIST dataset for our experiments due to its simplicity and popularity, which we believe facilitates easier interpretation by the reader. The well-known differences between handwritten digits in MNIST make it straightforward to understand how changes in input data can alter outputs (e.g., adding a straight line to the digit '5' to make it resemble a '9').
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token
Accept (poster)
Summary: This paper proposes xRAG, a method to map the retriever's embeddings into the LM's representation space with one token. While both the retriever and LM are fixed, xRAG trains a simple projector to adapt to them by paraphrase pretraining and context-aware instruction tuning. xRAG can significantly outperform non-retrieval and previous context compression methods and matches the performance of the uncompressed one, reducing RAG FLOPs by more than 3 times. Strengths: - The compression rate from the long context to one token is high - The only trainable component constitutes less than 0.1% of the LLM’s parameters, which can serve as an efficient plug-in - xRAG can match or surpass the original RAG with only one token for one document, and the analysis of the Resilience Rate and the Boost Rate is reasonable and interesting - In general, the paper is well written Weaknesses: - The main experiment in this paper is conducted only on one retrieved document, whose average length is just 175 tokens, but I think this is a **very short context** for modern LLMs (and thus the inference time is not a bottleneck). I think proving the effectiveness of xRAG on a really long context (thousands of tokens or more) of multiple documents is very important to see this method's real-world value - It is essential to consider the total training FLOPs of the projector. Although we don't need to actually optimize the LM and retriever, we still need the forward passes of these models during the projector's training, so **the cost can be significant**, especially when the amount of total two-stage training samples is very, very large (near 3M) - The projector is **tightly coupled** with the retriever and LM.
It is hard to prove the generalization abilities of xRAG to other black-box production LLMs where the projector cannot be trained directly from the LM's feedback, which may limit its potential impact - In Section 4.3, the requirements of the baselines include **no need for dataset-specific tuning**. However, how to explain that NQ, TriviaQA (two evaluation tasks) are included in the Context-aware Instruction Tuning data? - Missing baselines of context compression methods other than LLMLingua, like Gist-COCO. Will adding **more compression tokens** benefit the performance? [1]: Li, Xinze, et al. "Say more with less: Understanding prompt learning behaviors through gist compression." arXiv preprint arXiv:2402.16058 (2024). Technical Quality: 2 Clarity: 3 Questions for Authors: - For plugging in multiple retrieved documents, should we devise another form of training task or simply adopt the same projector multiple times? This is an interesting discussion that I would like to see - How simple can the projector be? I would like to see the performance of different design choices of the projector, such as Q-Former mentioned in the paper - What is the data source for Paraphrase Pretraining? I cannot find it in the paper Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We greatly appreciate your time and effort in reviewing our paper. Here, we would like to address your concerns point by point: - **About multiple documents:** Please refer to our general response about multiple documents in xRAG. - **The cost of training** The training cost of xRAG is minimal compared to full parameter tuning since the only trainable component is a two-layer MLP. In our setup (8xA00), full parameter tuning of a 7B dense model with 1 million samples takes about 24 hours. In the same amount of time, xRAG (8x7b) can finish training on 2.9 million samples (paraphrase pre-training + context-aware instruction tuning). Moreover, by caching all the document embeddings, we can further speed up the training by 25%. We appreciate your suggestion and will include this information in the revised version of our paper. - **xRAG for black-box LLM** From the user side, it is true that xRAG can only be applied to models with open weights. However, with the increasing popularity of cloud vector databases, LLM API providers could still benefit from xRAG if users can provide the index of retrieved documents in the vector database. - **About dataset-specific tuning.** When we refer to "without dataset-specific tuning," we mean that we adopt a one-for-all evaluation setting where the evaluation results from various datasets come from the same model, as opposed to one-for-one evaluation which requires dataset-specific tuning. To exclude the impact of the corresponding training set (NQ and TQA), we refer to Table 4 of our paper, where similar results are obtained even with only reading comprehension data involved (NQ and TQA excluded). Additionally, the LoRA results in Table 3 confirm that the improvement of xRAG does not stem from task knowledge inherited during context-aware instruction tuning. - **Missing baselines of context compression methods other than LLMLingua, like Gist-COCO. 
Will adding more compression tokens benefit the performance?** Thank you for pointing out this missing reference. Similar to Gist Token [1], Gist-COCO relies on gist representation, which might not be applicable in real-world RAG settings involving millions of documents. We will add this paper to the final version of xRAG in both the Related Work section and Appendix A. Regarding the addition of more compression tokens, we believe it is a balance between efficiency and efficacy. In Appendix G, we discuss that xRAG could be combined with a multi-vector or Matryoshka embedding system where the compression ratio is configurable. We believe this is an exciting direction for xRAG. - **For plugging in multiple retrieved documents, should we devise another form of training task or simply adopt the same projector multiple times? This is an interesting discussion that I would like to see** Please refer to our general response about multiple documents in xRAG. - **How simple can the projector be? I would like to see the performance of different design choices of the projector, such as Q-Former mentioned in the paper** As discussed in Section 3.4, the key design principle is the simplicity of the system. To our knowledge, the simplest form for a projector is an MLP architecture. We leave the exploration of more advanced and complex architectures for future work. - **What is the data source for Paraphrase Pretraining? I cannot find it in the paper** Apology for the confusion. The data source for Paraphrase Pretraining is the same Wikipedia dump used as the retrieval corpus. We will make this clear in the revised version of our paper. [1] Mu, Jesse, Xiang Lisa Li and Noah D. Goodman. “Learning to Compress Prompts with Gist Tokens.” *ArXiv* abs/2304.08467 (2023): n. pag. --- Rebuttal 2: Comment: Thanks to the authors. I have read the rebuttal carefully. I decided to not change my scores since I feel the limitations of this work still outweigh reasons to accept. 
(1): The additional training cost seems not trivial; (2): The efficiency gain in super-long context (not only just 3 docs) is not included. --- Rebuttal Comment 2.1: Comment: Dear Reviewer, We sincerely appreciate your thoughtful feedback and welcome the opportunity to address the points you've raised. - **Concern: The additional training cost seems non-trivial** Similar to how LLaMA[1] "violates" the scaling law by prioritizing the inference budget, we believe that inference cost is of utmost importance for a context compression method, especially in real-world RAG scenarios involving millions of documents. This principle guided xRAG's design: by reusing existing document embeddings and introducing only a two-layer MLP into the existing LLM, we've minimized additional complexity. While our method prioritizes inference-time efficiency, it's worth noting that **the training cost is also minimal compared to existing compression methods such as ICAE[2] and LLMLingua[3].** The only trainable component in xRAG is the newly introduced MLP. We would greatly appreciate if you could point us towards any references to context compression methods with trivial training costs that we may have overlooked. --- - **Concern: The efficiency gain in handling super-long contexts (beyond just 3 documents) is not included.** RAG is widely recognized as a technique designed to alleviate the need for LLMs to process long contexts by chunking documents into smaller pieces and retrieving the most relevant ones. This is the typical operational model of modern RAG systems. As such, long-context processing and RAG represent two distinct paradigms for LLMs. Research has shown that useful information generally appears within the top-k documents, and RAG performance tends to plateau as more documents are involved [4] [5]. Consequently, handling super-long contexts falls outside the scope of RAG and, by extension, beyond the scope of xRAG. 
However, to address your concern about efficiency with more documents, we've conducted additional tests. If the "super-long" context you mentioned falls within the top-k chunks of RAG (where k is typically less than 10), here are the efficiency results for top-10 documents, following the benchmark setting outlined in Section 5.2 of our paper:

| Top-10 chunks | CUDA Time (s) | GFLOPs | Peak Mem (GiB) |
| ------------- | ------------- | -------------- | -------------- |
| RAG | 3.58s | 10712.22 | 20.42 |
| xRAG | 0.62s (x5.7) | 529.33 (x20.2) | 13.84 (x1.4) |

As demonstrated, xRAG maintains significant efficiency gains even when processing a larger number of documents. We appreciate your insightful feedback and look forward to further discussion on these points. --- [1] Touvron, Hugo, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave and Guillaume Lample. “LLaMA: Open and Efficient Foundation Language Models.” ArXiv abs/2302.13971 (2023): n. pag. [2] Ge, Tao, Jing Hu, Xun Wang, Si-Qing Chen and Furu Wei. “In-context Autoencoder for Context Compression in a Large Language Model.” ArXiv abs/2307.06945 (2023): n. pag. [3] Jiang, Huiqiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang and Lili Qiu. “LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models.” Conference on Empirical Methods in Natural Language Processing (2023). [4] Lewis, Patrick, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel and Douwe Kiela. “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.” ArXiv abs/2005.11401 (2020): n. pag. [5] Shi, Weijia, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer and Wen-tau Yih. 
“REPLUG: Retrieval-Augmented Black-Box Language Models.” ArXiv abs/2301.12652 (2023): n. pag.
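The rebuttal above repeatedly describes the projector as nothing more than a two-layer MLP that maps a frozen retriever embedding into the LLM's representation space. Below is a minimal, dependency-free sketch of a module of that shape; the dimensions, initialization, and activation are illustrative assumptions, not taken from the xRAG code.

```python
# Sketch of an xRAG-style projector: a two-layer MLP mapping one retriever
# embedding to one "document token" in the LLM's embedding space.
# Dimensions and initialization are illustrative assumptions, not the paper's.
import math
import random

RETRIEVER_DIM = 8   # stands in for e.g. 4096 in the paper
LLM_DIM = 6         # stands in for the LLM's hidden size

def init_matrix(rows, cols, seed):
    rng = random.Random(seed)
    bound = 1.0 / math.sqrt(cols)
    return [[rng.uniform(-bound, bound) for _ in range(cols)] for _ in range(rows)]

W1 = init_matrix(RETRIEVER_DIM, RETRIEVER_DIM, seed=0)  # first layer
W2 = init_matrix(LLM_DIM, RETRIEVER_DIM, seed=1)        # second layer

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def gelu_approx(v):
    # tanh approximation of GELU, a common choice for such projectors
    return [0.5 * x * (1 + math.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * x ** 3)))
            for x in v]

def project(doc_embedding):
    """Map one retriever embedding to one document token for the LLM."""
    return matvec(W2, gelu_approx(matvec(W1, doc_embedding)))

doc_token = project([0.1] * RETRIEVER_DIM)
```

In the actual system, the resulting vector would replace a placeholder token in the LLM's input embedding sequence; here it is simply a vector of the LLM's hidden size.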
Summary: The paper on xRAG presents an innovative approach to context compression in retrieval-augmented generation, achieving significant efficiency gains while maintaining performance. The method's compatibility with various language models and preservation of the plug-and-play nature are notable strengths. However, the added complexity and dependency on high-quality embeddings might pose challenges. The whole paper is clear and easy to understand. Strengths: 1. This paper introduces xRAG, a novel context compression method that effectively merges document embeddings into the language model’s representation space through modality fusion, which is an attractive topic. 2. The xRAG achieves significant context compression compared with baselines. 3. The plug-and-play nature makes it easy to be applied with other backbone models. Weaknesses: 1. The introduction of a modality bridge and the requirement for a pretrained modality encoder adds complexity to the system. 2. Will the bias in projector tuning cause hallucinations? 3. The performance of xRAG is likely highly dependent on the quality of the dense document embeddings. 4. The compression method, while efficient, might result in the loss of information present in the original documents. Do we need such extreme compression since most of the queries are not too long? Or how to balance the compressed tokens with the performance? 5. The concepts are confusing. Sometimes, the author uses the "modality bridge" and also uses the "projector". I think they are the same module right? 6. The paper mentions using a two-layer MLP as the projector but doesn't explore how different projector architectures might impact performance. 7. The paper suggests that xRAG is more robust than RAG when dealing with irrelevant documents but doesn't provide a clear mechanism for how xRAG achieves this robustness. 8. How will the hyperparameter alpha influence the final results? 9. 
How to ensure the compressing process filters the irrelevant information rather than the crucial ones? 10. Will sensitive information like numbers or times be kept by the projector after compressing? Technical Quality: 2 Clarity: 3 Questions for Authors: See weakness. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Appreciate your time and effort in reviewing our paper. We would like to address your concerns point by point: - **Added complexity to the system** In xRAG, we have emphasized throughout our paper that our key design principle is to add minimal complexity to the existing RAG system. In Section 3.4, we explicitly state that our approach **does not require an additional pretrained modality encoder** but instead utilizes existing document embeddings from the retrieval datastore. This design choice ensures **zero** additional space complexity, enabling practical scalability in real-world RAG systems handling millions of documents. To make our method plug-and-play, we freeze the LLM and retriever, and the only trainable component is the projector. - **Will the Bias in Projector Tuning Cause Hallucinations?** In Section 6.1, we introduce a metric called Resilience Rate, which measures how retrieval (RAG or xRAG) affects LLM performance and prevents hallucinations. Empirical results show that our method is more robust and produces fewer hallucinations than vanilla RAG. - **Dependence on the Quality of Dense Document Embeddings** It is widely acknowledged that the performance of the modality encoder heavily impacts downstream performance, and xRAG is no exception. However, xRAG is not limited to only strong embedding models. In Section 6.2, we demonstrate the effectiveness of xRAG across various embedding models, showing that even with a four-year-old DPR model, xRAG still yields over 8% improvement. - **The compression method, while efficient, might result in the loss of information present in the original documents. Do we need such extreme compression since most of the queries are not too long? Or how to balance the compressed tokens with the performance?** According to Claude Shannon's information theory, one classic estimate of the amount of information per English word is 11.82 bits per word [1]. 
This means, in theory, we have enough raw state to encode more tokens with a single embedding without loss of information. In that sense, we can be more efficient when dealing with real-world multi-round, multi-document RAG settings, where prompt length could be up to thousands of tokens. More importantly, in Section 5.2, we have demonstrated that the current xRAG could bring considerable efficiency improvement even on a high-end server GPU. As for balancing compression and performance, we discuss in Appendix G that xRAG can be combined with a multi-vector or Matryoshka embedding system where the compression ratio is configurable. We believe this is an exciting direction for xRAG. - **The concepts are so confused. Sometimes, the author uses the "modality bridge" and also uses the "projector". I think they are the same module right?** They refer to the same module that connects two modalities. - **The paper mentions using a two-layer MLP as the projector but doesn't explore how different projector architectures might impact performance.** As discussed in Section 3.4, the key design principle is simplicity. Therefore, we chose the simplest and most commonly used projector: a two-layer MLP. Exploration of more advanced architectures is left for future work. - **The paper suggests that xRAG is more robust than RAG when dealing with irrelevant documents but doesn't provide a clear mechanism for how xRAG achieves this robustness.** One reason for xRAG's robustness is that it is trained with retrieved context in the second training phase, which improves the LLM's resilience to noisy retrievals. This phenomenon is also observed in [2][3]. Additionally, xRAG can avoid word-by-word repetition seen in RAG when dealing with noisy retrieval results. - **How will the hyperparameter alpha influence the final results?** We tested several values for the hyperparameter alpha ({0.1, 0.5, 1.0, 2.0, 3.0}) and found that 2.0 gave the best results on the validation set. 
No significant differences were observed when alpha ≥1. - **How to ensure the compressing process filters the irrelevant information rather than the crucial ones?** Our paper, like previous context compression papers, does not claim to achieve lossless compression. It is a balance between efficiency and efficacy. We do not filter out any information; instead, we aim to recover the original information with a single embedding vector. Our novel training paradigm, from paraphrase pretraining to context-aware instruction tuning, is designed to achieve this goal. - **Will sensitive information like numbers or times be kept by the projector after compressing?** xRAG has the ability to retain sensitive information such as numbers and times. As shown in Figure 9 of our paper, xRAG successfully identifies important numbers and answers questions correctly. Thank you again for your review. We hope that our response alleviates some of your concerns and that you might consider raising the score of our paper. Best regards, Authors [1] Claude Elwood Shannon. “Prediction and Entropy of Printed English.” (1951). [2] Lin, Xi Victoria, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Rich James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke S. Zettlemoyer and Scott Yih. “RA-DIT: Retrieval-Augmented Dual Instruction Tuning.” *ArXiv* abs/2310.01352 (2023): n. pag. [3] Luo, Hongyin, Yung-Sung Chuang, Yuan Gong, Tianhua Zhang, Yoon Kim, Xixin Wu, Danny Fox, Helen M. Meng and James R. Glass. “SAIL: Search-Augmented Instruction Learning.” *Conference on Empirical Methods in Natural Language Processing* (2023). --- Rebuttal Comment 1.1: Comment: Thanks to the author for the responses and some of my concerns have been solved. The necessity of compressing the context into one token remains questionable and I will keep my score. 
--- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for taking the time to engage in further discussion and for acknowledging that some of your concerns have been addressed. We would like to clarify the rationale behind compressing the context into a single token. As demonstrated in our paper, our approach **not only achieves a higher compression rate but also results in better accuracy compared to existing context compression methods.** We believe that improving both performance and efficiency is crucial in advancing the state-of-the-art. Could you please elaborate on why a context compression method with better performance and efficiency is not preferred? We appreciate your insights and look forward to your feedback. Sincerely, Authors
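The rebuttal above leans on the Resilience Rate from Section 6.1 to argue that projector tuning does not increase hallucination. One plausible formalization of the Resilience and Boost Rates is sketched below; the exact definitions are assumptions of this sketch, not copied from the paper.

```python
# Sketch of Resilience/Boost Rate metrics as described in the discussion.
# Definitions here are plausible readings, not the paper's exact formulas.

def resilience_rate(base_correct, rag_correct):
    """Share of examples the base LLM got right that retrieval does not break."""
    kept = [b and r for b, r in zip(base_correct, rag_correct)]
    denom = sum(base_correct)
    return sum(kept) / denom if denom else 0.0

def boost_rate(base_correct, rag_correct):
    """Share of examples the base LLM got wrong that retrieval fixes."""
    fixed = [(not b) and r for b, r in zip(base_correct, rag_correct)]
    denom = sum(1 for b in base_correct if not b)
    return sum(fixed) / denom if denom else 0.0

# Toy per-example correctness flags (illustrative data):
base = [True, True, False, False, True]
rag = [True, False, True, False, True]
# resilience = 2/3, boost = 1/2
```

A high resilience rate with a modest boost rate would correspond to the rebuttal's claim that xRAG is robust while not yet matching RAG on boost.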
Summary: This paper proposes xRAG, a context compression method designed specifically for retrieval-augmented generation. xRAG redefines the use of document embeddings in dense retrieval by integrating them as features from the retrieval modality. It achieves an extreme compression rate to only one token. The authors perform experiments to demonstrate the effectiveness of the proposed method. Strengths: 1. Using modality fusion bridge to connect the embeddings from dense retrieval and LLMs for retrieval-augmented generation. 2. Extremely compress the input tokens from RAG to only one token. 3. Comparable performance with lower FLOPs. Weaknesses: 1. The proposed method relies heavily on the performance of dense retrieval models. The dense retrieval models have poor generalization for out-of-domain [1], which does not match the general ability of LLMs, but the experiment in this paper is only conducted on wiki-based knowledge bases of Q&A tasks. 2. The specific details of selection and training method of dense retrieval model in xRAG are needed. 3. How xRAG performs on long-context RAG? Can only one token represents the semantics of the retrieved long-documents? [1] Back to Basics: A Simple Recipe for Improving Out-of-Domain Retrieval in Dense Encoders Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How xRAG performs on long-context RAG? Can only one token represents the semantics of the retrieved long-documents? 2. How xRAG selects or trains the dense retrieval model? 3 How xRAG performs on the domain that shifts from the training domain of dense retrieval model? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Authors have discussed the limitations of this paper in Section G of Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We greatly appreciate your time and effort in reviewing our paper. Here, we would like to address your concerns point by point: - **Out-of-Domain Generalization** Thank you for bringing up this issue. If we understand your concerns correctly, they can be divided into two parts: 1. The Out-of-Domain Generalization of Dense Retrieval Models: We acknowledge that dense retrieval models face challenges with cross-domain generalization, yet they remain the de facto approach for modern RAG systems [1][2][3]. Importantly, dense retrieval models (such as DRAGON[4] and ColBERT[5]) have demonstrated stronger generalization compared to their sparse counterparts (such as BM25), as evidenced by benchmarks like BEIR[6] and MTEB[7]. These models are all compatible with xRAG. 2. The Out-of-Domain Generalization of xRAG: The ultimate goal of xRAG is to create compatible representations between dense embeddings and their textual form within the LLM representation space using a modality fusion approach. We believe a well-trained projector can transform any text into the embedding space. To verify this, we tested our xRAG model (paraphrase pre-trained on Wikipedia) in the biomedical domain. Specifically, we tested on PubMedQA[8], BioASQ-Y/N[9], and ChemProt[10] using PubMed[11] as our retrieval corpus to evaluate the cross-domain generalization ability of xRAG. By comparing the first two rows in the table below, we observed that, by simply adding one document token, xRAG gives a considerable improvement over the Mistral-7b baseline, demonstrating the cross-domain generalization of xRAG. Moreover, xRAG can easily adapt to other domains because the first stage of the training pipeline, paraphrase pretraining, is a self-supervised process that does not require handcrafted labels. We sample documents from PubMed and pre-train xRAG on it, which further improves performance, as shown in the last row. 
| Model | PubMedQA | BioASQ-Y/N | ChemProt |
| --- | --- | --- | --- |
| Mistral-7b | 43.6 | 74.5 | 66.7 |
| xRAG-7b | 49.2 | 82.1 | 71.8 |
| xRAG-7b (PubMed) | 51.1 | 83.2 | 72.8 |

- **Details about Training and Selection of Dense Models:** As stated in Section 4.2 of our paper, we selected SFR[12] as our dense embedding model, which, at the time of writing, held the leading position on the MTEB leaderboard. We did not train this model ourselves, allowing us to reuse the offline constructed embeddings and to add ZERO overhead to the existing RAG system, the core of xRAG as detailed in Section 3.4. - **Long-Context RAG:** RAG is commonly considered a technique to alleviate the need for LLMs to read long contexts by chunking documents into pieces, which is how modern RAG systems operate. Therefore, technically speaking, long-context and RAG represent two different generation paradigms for LLMs. If your concern regarding xRAG involves multiple document chunks, please refer to our general response. Thank you again for your review. We hope that our response addresses your concerns and that you might consider raising the score of our paper. Best regards, Authors References: [1] Lin, Xi Victoria, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Rich James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke S. Zettlemoyer and Scott Yih. “RA-DIT: Retrieval-Augmented Dual Instruction Tuning.” *ArXiv* abs/2310.01352 (2023): n. pag. [2] Yu, Yue, Wei Ping, Zihan Liu, Boxin Wang, Jiaxuan You, Chao Zhang, Mohammad Shoeybi and Bryan Catanzaro. “RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs.” (2024). [3] Shao, Rulin, Jacqueline He, Akari Asai, Weijia Shi, Tim Dettmers, Sewon Min, Luke S. Zettlemoyer and Pang Wei Koh. “Scaling Retrieval-Based Language Models with a Trillion-Token Datastore.” (2024). [4] Lin, Sheng-Chieh, Akari Asai, Minghan Li, Barlas Oğuz, Jimmy J. Lin, Yashar Mehdad, Wen-tau Yih and Xilun Chen. 
“How to Train Your DRAGON: Diverse Augmentation Towards Generalizable Dense Retrieval.” *ArXiv* abs/2302.07452 (2023): n. pag. [5] Khattab, O. and Matei A. Zaharia. “ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT.” *Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval* (2020): n. pag. [6] Thakur, Nandan, Nils Reimers, Andreas Ruckl'e, Abhishek Srivastava and Iryna Gurevych. “BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models.” *ArXiv* abs/2104.08663 (2021): n. pag. [7] Muennighoff, Niklas, Nouamane Tazi, Loic Magne and Nils Reimers. “MTEB: Massive Text Embedding Benchmark.” *Conference of the European Chapter of the Association for Computational Linguistics* (2022). [8] Jin, Qiao, Bhuwan Dhingra, Zhengping Liu, William W. Cohen and Xinghua Lu. “PubMedQA: A Dataset for Biomedical Research Question Answering.” *Conference on Empirical Methods in Natural Language Processing* (2019). [9] Yang, Zi, Yue Zhou and Eric Nyberg. “Learning to Answer Biomedical Questions: OAQA at BioASQ 4B.” (2016). [10] Peng, Yifan, Shankai Yan and Zhiyong Lu. “Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets.” *BioNLP@ACL* (2019). [11] https://huggingface.co/datasets/ncbi/pubmed [12] https://blog.salesforceairesearch.com/sfr-embedded-mistral/
Summary: This paper presents xRAG, a context compression method for Retrieval-Augmented Generation (RAG). Their key idea is to treat document embeddings from dense retrieval as features from a retrieval modality, which allows compressing retrieved documents into a single token. Experiments show that their method can achieve extreme compression while maintaining performance comparable with traditional RAG. Strengths: - The proposed xRAG method treats document embedding as a modality, and shows strong potential for efficient RAG systems. Specifically, the proposed method uses lightweight training strategy that consists of paraphrase pretraining and context-aware instruction tuning. - xRAG demonstrates strong empirical results: it can achieve similar performance with traditional RAG methods on various QA datasets, and has the best performance among compression-based RAG methods. However, it still has limitations in tasks that require multi-step reasoning. - The paper provides in-depth analysis of the method's behavior, including studies on robustness, effectiveness of different components, and failure cases. - The writing is very clear and easy to understand. Weaknesses: - RAG is proposed to reduce the hallucination of language model generation, by letting them refer to the original context. It is still unclear why xRAG that compresses a document into a single token could still leave the factual knowledge intact, and the authors should discuss more on this. - As the limitation paragraph mentions, the paper does not consider the case of retrieving multiple documents. Though the proposed method works for one-document case, it may not generalize to multiple-document case and may affect the training of the compression projector. Technical Quality: 3 Clarity: 4 Questions for Authors: Please refer to the previous section. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, the paper addresses the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We greatly appreciate your time and effort in reviewing our paper. Here, we would like to address your concerns point by point: - **Question about why xRAG could compress a document chunk into a single token** Thank you for this insightful question. We believe this is the core of xRAG, and we want to explain it from two perspectives. First, it is surprising how much information a single embedding can contain. As demonstrated in [1], a single vector of shape 768 can recover 32 tokens with over a 90% exact match. In our experiments, we use an embedding of shape 4096 to cover a document chunk with an average of 180 tokens. Moreover, according to Claude Shannon's information theory, one classic estimate of the amount of information per English word is 11.82 bits per word [2]. This suggests that, in theory, we still have enough raw state to encode more tokens with a single embedding. Second, the overall performance is a joint effect from both Boost Rate and Resilience Rate (as defined in Section 6.1 of our paper) and we have to acknowledge that xRAG does not yet perform comparably with RAG in terms of Boost Rate. Improving the Boost Rate of xRAG while maintaining a high level of Resilience Rate—balancing external and internal knowledge—is a primary focus of our future work. - **About retrieving multiple documents** Please refer to our general response. Thank you again for your review. We hope that our response alleviates some of your concerns and that you might consider raising the score of our paper. Best regards, Authors [1] Morris, John X., Volodymyr Kuleshov, Vitaly Shmatikov and Alexander M. Rush. “Text Embeddings Reveal (Almost) As Much As Text.” *Conference on Empirical Methods in Natural Language Processing* (2023). [2] Claude Elwood Shannon. “Prediction and Entropy of Printed English.” (1951).
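The Shannon-style capacity argument in this rebuttal can be sanity-checked with back-of-envelope arithmetic. The fp16 storage assumption below is illustrative and mine, not the paper's.

```python
# Back-of-envelope check of the rebuttal's capacity argument:
# how many English words could one embedding encode in principle?
EMBED_DIM = 4096       # embedding size cited in the rebuttal
BITS_PER_DIM = 16      # assume fp16 storage per dimension (my assumption)
BITS_PER_WORD = 11.82  # Shannon's classic estimate for English [Shannon 1951]

raw_bits = EMBED_DIM * BITS_PER_DIM        # total raw state of the vector
word_capacity = raw_bits / BITS_PER_WORD   # loose upper bound on words

# The bound lands in the thousands of words, far above the ~180-token
# chunks that xRAG compresses, which is the rebuttal's point.
```

This is only an information-theoretic upper bound; whether a trained projector actually preserves that much content is the empirical question the Boost/Resilience analysis addresses.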
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely appreciate your time and effort in reviewing our paper. We would like to address the concerns regarding the multiple document expansion of xRAG. Our work represents a pioneering effort in efficient RAG with modality fusion, a research direction that has been acknowledged as both interesting and promising by the reviewers. In our current experimental setup, we have focused on the top-1 document in both training and evaluation to facilitate rapid iteration of data-centric experiments and for efficiency reasons. We view the expansion to multiple documents as an incremental improvement (1 to N) of the existing framework, which, while valuable, is less fundamental compared to our core innovation (0 to 1). Adapting xRAG to a top-k documents setting presents several avenues for exploration, including: 1. The type of modality projector (e.g., MLP, Q-Former, or Perceiver). 2. Data selection and mixing strategies (e.g., data clusters with longer contexts). 3. Optimization for document-relationship modeling, and more. To demonstrate xRAG's adaptability to multi-document scenarios, we have implemented a straightforward approach focusing primarily on the data perspective. Specifically, we upsampled summarization data, which typically includes longer documents, and divided these into chunks during our context-aware instruction tuning phase. We maintained other configurations as in the current xRAG implementation. The results of this approach are presented below. While this naive implementation may not represent the optimal configuration for a multi-document setting in xRAG, as it primarily stems from the data aspect, it effectively showcases our framework's flexibility and extensibility to handle multiple documents.

| | NQ | TriviaQA | WebQ | HotpotQA |
| --- | :---: | :---: | :---: | :---: |
| top1 | 39.1 | 65.7 | 39.4 | 34.0 |
| top3 | 41.4 | 67.3 | 41.1 | 35.3 |

Thank you once again for your valuable feedback. Best regards, Authors
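The "apply the same projector per document" option raised by the reviewers can be sketched as follows; the projector body is a placeholder transform, not the trained two-layer MLP.

```python
# Naive sketch of extending xRAG from top-1 to top-k: reuse one shared
# projector across the retrieved document embeddings, producing one
# compressed token per document. `project` is a placeholder, not the
# paper's trained projector.
def project(doc_embedding):
    return list(doc_embedding)  # stand-in for the trained two-layer MLP

def compress_top_k(doc_embeddings, k=3):
    """One 'document token' per retrieved document, truncated to top-k."""
    return [project(e) for e in doc_embeddings[:k]]

tokens = compress_top_k([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]])
```

Prepending these k tokens to the query would keep prompt growth at one token per document, which is where the top-3 results above come from under the data-centric recipe described.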
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Block Transformer: Global-to-Local Language Modeling for Fast Inference
Accept (poster)
Summary: This paper proposes a Block Transformer architecture which adopts hierarchical global-to-local modeling to mitigate the computational cost and KV cache memory of Self Attention. Block Transformer isolates global modeling with three blocks: Embedder, Block Decoder, and Token Decoder. Embedder encodes block information for auto-regressive modeling in Block Decoder. The aggregated information is plugged into Token Decoder for final token-level decoding. Block Transformer shows a 10-20x inference improvement at similar perplexity to the traditional global Transformer. This paper gives a detailed ablation about the architecture design of Block Transformer. Besides, the global modeling capability and uptraining strategy are also discussed. Strengths: 1. The paper is well organized and the experiments are solid and comprehensive. 2. Block Transformer trades inference efficiency with model parameters, which is a bold innovation. 3. Block Transformer achieves low-cost inference compared with standard Transformer. 4. The architecture analysis and ablation studies show the effectiveness of Block Transformer design. Weaknesses: 1. Block Transformer uses a much bigger model size to compensate for the performance loss. There are countless problems with that, including training partitioning and inference infrastructure. 2. The actual inference efficiency comparison is doubtful. For example, in Table 2, the decode throughput of Block Transformer is much bigger than standard Transformer under the same model size. Since Block Transformer only saves Attention computation and the FFN computation stays the same, I'm confused why attention computation occupies most of the overall computation. 3. The long-sequence modeling capability is not evaluated. Since Block Transformer squeezes the context representation, it is questionable whether Block Transformer can retrieve the global context information. 
I think some common long-sequence experiments will help, e.g., Needle-in-a-Haystack. 4. The scaling property is not discussed thoroughly enough. For example, in Figure 2, there are results for different model sizes, but no "scaling law" is fitted for Block Transformer. Besides, the scaling trend in Figure 2 does not look promising to my eye. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Concerns in the Weaknesses part. 2. I'm curious about the inference experiment setting. Are FlashAttention and FlashDecoding techniques used for the vanilla Transformer or Block Transformer? I believe they are already a necessary part of any 2024 setup. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
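To make the three-component data flow summarized in this review concrete, here is a minimal shape-level sketch (all dimensions, weight matrices, and the concatenate-then-project embedder are illustrative assumptions, not the authors' actual implementation):

```python
import numpy as np

# Illustrative sketch of the Block Transformer data flow described in the
# review summary (hypothetical shapes, not the paper's implementation).
# A sequence of L tokens is grouped into L / L_B blocks of L_B tokens each.
L, L_B, d = 2048, 4, 256          # sequence length, block length, hidden size
n_blocks = L // L_B               # 512 blocks

rng = np.random.default_rng(0)
token_emb = rng.standard_normal((L, d))

# 1. Embedder: aggregate each block of L_B token embeddings into one
#    block embedding (stand-in: concatenation followed by a projection).
W_embed = rng.standard_normal((L_B * d, d)) / np.sqrt(L_B * d)
block_emb = token_emb.reshape(n_blocks, L_B * d) @ W_embed   # (512, 256)

# 2. Block decoder: causal self-attention over the *block* sequence,
#    so attention cost scales with n_blocks rather than L.
#    (stand-in: identity; a real model applies self-attention layers here)
context_emb = block_emb

# 3. Token decoder: decodes the L_B tokens of each block locally, attending
#    only within the block plus its context embedding, so its KV cache
#    length is tiny compared to L.
local_kv_len = L_B + 1            # block tokens + one context embedding

assert context_emb.shape == (n_blocks, d)
print(n_blocks, local_kv_len)     # 512 5
```

The point the review's weakness 2 questions is visible here: step 2 attends over 512 blocks instead of 2048 tokens, and step 3 caches only ~5 positions instead of the full context.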
Rebuttal 1: Rebuttal: We thank you for the comments and for acknowledging the innovation of trading inference efficiency and parameters, the efficiency of Block Transformer, the solid experiments, and the good paper organization. We address the weaknesses below. . **W1. Bigger model size** Despite the bigger model size, Block Transformers achieve **higher throughput** and **lower total memory usage** during inference. For training, it is possible to **uptrain existing vanilla models** with ~10% of training steps, and training costs can be **cheaper under a constrained inference budget.** **Inference case** - **Total memory usage**: Block Transformers can achieve significantly higher batch sizes on the same hardware, as the KV cache **memory usage per sample of Block Transformers is 3-6x lower**, despite the larger size (`Table 2`). Note that the KV cache of a single sample can be as large as the parameters of the entire model (`Table 2`). - **Throughput**: our main results show significant gains in throughput over vanilla models *under the same hardware constraints*. - These advantages extend to the multi-GPU inference scenario. Tensor parallelism can further reduce per-GPU model parameter memory, leaving more room for the KV cache, which is advantageous for the Block Transformer. **Training case** - IsoFLOP analysis in `Section 3.6` shows that Block Transformers can achieve better performance **and** significantly higher throughput using the **same training FLOPs** as vanilla models when constrained by an inference budget (we discuss this further in W4). - `Section 3.7` shows that it is possible to **uptrain existing vanilla models** into Block Transformers, significantly reducing development costs. Uptrained models approach the performance of models pre-trained from scratch, using just `10%` of training steps (`Figure 5a` and `Appendix M`). . **W2. 
Doubts on inference efficiency comparison** We point out that **Block Transformer does not “only save attention computation”**, but has a wide range of advantages stemming from our architecture design. Please refer to `Section 2` and `Appendix D`. We also provide a **visual recap of advantages in `Figure 1` of the attached PDF.** We reiterate four key advantages of our architecture: 1. `Attention + FFN` The token decoder does not need to prefill prompt tokens. 2. `Attention` KV cache IO at the token decoder is reduced by $L/L_B=2048/4=512\times$. 3. `Attention + FFN` The block decoder operates at the block level, reducing overall costs by a factor of $L_B=4$; KV cache IO cost is reduced quadratically. 4. `Attention + FFN` Overall KV cache storage is reduced by $L_B=4\times$ in the block decoder and nearly eliminated in the token decoder. This enables higher batch sizes and thus higher compute utilization. We also empirically pinpoint these benefits in **new detailed measurements in `Table 1`** of the `attached PDF`. These show that Block Transformer speeds up all segments of computation relative to a loss-comparable vanilla model: {attention, FFN} operations at {lower, upper} layers during {prefill, decoding} stages under {prefill, decode}-heavy settings. . **W3. No evaluation of long-sequence modeling capability** `Figure 4b` of `Section 3.5` demonstrates the long-context modeling capabilities of Block Transformers. Our models achieve lower loss by utilizing more context, up to 2K tokens, on the PG19 test dataset. This is a standard benchmark used to evaluate long-context modeling capability [1, 2]. We also extend this to 8K context in `Figure 2 of the attached PDF`, with new models pretrained on 8K context for ~30B tokens. We show that the Block Transformer (1.2B) achieves lower loss at all token positions relative to a comparable vanilla baseline (300M), despite achieving significantly higher throughput (`Appendix Figure 10`). . **W4. 
Scaling property** In Appendix A, we discuss that a rigorous Chinchilla-style scaling study is infeasible, as it requires repeated training runs with appropriate LR schedules for each FLOP budget [3]. However, expanding on W1, we do discuss in `Section 3.6` why our isoFLOP analysis (which is in the overtraining regime for the vanilla model) is relevant under current trends, where small models are significantly overtrained to maximize performance at a given inference budget [4]. That is, to maximize performance using plentiful training FLOPs under a tight inference budget with vanilla models, we must resort to training a small model far into a suboptimal training regime. Since Block Transformers have significantly smaller inference costs, we can use larger models, which achieve **better performance given the same training FLOPs**, **under inference budget constraints**. This is what is shown in `Figure 4c`. . **Q2. FlashAttention and FlashDecoding** **Existing experiments**: all models in our paper are based on GPTNeoX using MHA on Huggingface, and we used the eager attention implementation for inference due to limited support for FlashAttention2 at the time of submission. **FlashDecoding [5]**: New measurements using FlashAttention2’s FlashDecoding kernel show a **similar Pareto frontier as our existing results** in `Figure 3 of the attached PDF`. Please compare with `Figure 2 of the main paper`. We find that FlashAttention2 achieves `8-41%` throughput gains in vanilla models and `+8% to +31%` in block models in the prefill-heavy setting. Gains are diminished in larger models. We observe `+16% to -37%` speedup across models in the decode-heavy setting, without a coherent trend. . **References** [1] Zhang, Zhenyu, et al. "H2o: Heavy-hitter oracle for efficient generative inference of large language models." [2] Xiao, Guangxuan, et al. "Efficient streaming language models with attention sinks.” [3] Hoffmann, Jordan, et al. 
"Training compute-optimal large language models.” [4] Touvron, Hugo, et al. "Llama: Open and efficient foundation language models.” --- Rebuttal 2: Title: Response to Authors' Rebuttal Comment: I appreciate authors' thorough and patient response. I have a better understanding of the experiment setting. However, I still have some of my previous concerns: 1. Measuring the perplexity on PG19 to evaluate the long-sequence modeling capability is still not a good experiment. Since some subsequent works show that H2O and Attention Sink are not good at some long context tasks, including Needle-in-a-Haystack, LongBench, and ZeroScrolls, I don't agree it's a **standard benchmark**. I believe it is a novel and insightful work. But there is still space for improvement. In a nutshell, the paper will look much stronger if the experiment is on a "**modern**" setting, including the latest evaluation settings and kernel techniques. I will keep my initial score. --- Rebuttal Comment 2.1: Comment: Thank you for acknowledging our comments. We are glad that we have resolved many of your concerns including model size, inference efficiency, and scaling properties. While we have shown positive long context perplexity results up to 8K, we will also make sure to perform additional evaluation on more recent long context tasks. Regarding modern implementation, we would like to re-iterate that **Vanilla and Block Transformers both benefit from modern FlashDecoding kernels** (`Figure 3 of attached PDF`), maintaining our pareto results. Detailed measurements in `Table 2 of attached PDF` suggests that Block Transformers will still be faster overall, when orthogonally applying modern attention schemes such as MQA/GQA. We will also include these analyses and more in our final paper. --- Rebuttal Comment 2.2: Title: Experimental results on the Needle-in-a-Haystack task Comment: We appreciate again the acknowledgement for the novelty and insightful contributions of our work. 
To the best of our knowledge, recent long-context benchmarks like Needle-in-a-Haystack (NIAH), LongBench, and ZeroScrolls typically evaluate instruction-tuned models, as opposed to our pre-trained base models. Nevertheless, we are pleased to share **additional results on the Needle-in-a-Haystack task**. We found that **Block Transformers perform equally or stronger than loss-equivalent vanilla models**, consistently across **(1) needle locations**, **(2) model scales** and **(3) prompt variants**. . ## Experimental settings. Following prior work [1], we construct the context by first sampling 2K-length snippets from concatenated essays written by Paul Graham as the “haystack”, and then inserting a “needle” containing key information at a random location. Following [1], we use this needle format: `The special magic {city} number is: {number}`. - `{city}` is a randomly chosen city name - `{number}` is a random 7-digit number. We then append a prompt that queries the model to retrieve the 7-digit number. We consider two prompt formats: **1. Gemini prompt** Format: `<context>\n{context}\n</context>\n\nWhat is the special magic {city} number?\n\nHere is the magic number from the context:` We mostly followed the NIAH prompt used in Gemini [1], but we excluded the “Don’t give information outside the document or repeat your findings” part, as our models are not instruction-tuned. **2. Verbatim prompt** Format: `<context>\n{context}\n</context>\n\n{question}\n\nThe special magic {city} number is:`. Here, we used the exact same format as that in the needle to query the model. We measured accuracy by generating 20 new tokens and considering a prediction correct if the generated text contains the 7-digit number. . ## Experimental results. Note that depth refers to the relative location of the needle within the haystack, in percentages. 
**Gemini prompt** | Depth | 0 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | Mean | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Vanilla 19M | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.20% | 0.80% | 6.40% | 0.67% | | Vanilla 85M | 21.00% | 16.40% | 21.80% | 27.40% | 36.60% | 28.00% | 26.80% | 40.20% | 41.80% | 37.60% | 22.80% | 29.13% | | Vanilla 300M | 46.20% | 69.00% | 72.80% | 78.60% | 76.40% | 70.40% | 71.80% | 74.80% | 73.80% | 78.40% | 66.20% | 70.76% | | Block 85M | 5.60% | 2.40% | 0.80% | 0.80% | 0.20% | 1.00% | 0.80% | 1.00% | 2.60% | 1.80% | 6.40% | 2.13% | | Block 300M | 23.40% | 52.60% | 52.60% | 46.60% | 46.00% | 49.20% | 58.40% | 70.40% | 64.00% | 53.60% | 18.40% | 48.65% | | Block 800M | 35.80% | 74.00% | 76.40% | 78.40% | 69.80% | 77.40% | 76.40% | 79.00% | 75.20% | 72.80% | 53.60% | 69.89% | | Block 1.2B | 57.20% | 86.60% | 88.80% | 85.60% | 80.40% | 85.20% | 90.40% | 89.20% | 91.00% | 90.40% | 78.80% | 83.96% | **Verbatim prompt** | Depth | 0 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | Mean | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Vanilla 19M | 8.20% | 1.40% | 3.00% | 6.80% | 7.80% | 12.60% | 45.40% | 65.80% | 63.40% | 84.60% | 99.40% | 36.22% | | Vanilla 85M | 95.60% | 99.40% | 99.00% | 99.40% | 99.20% | 99.20% | 99.00% | 99.60% | 99.60% | 99.00% | 95.60% | 98.60% | | Vanilla 300M | 99.60% | 100.00% | 100.00% | 99.80% | 100.00% | 100.00% | 99.80% | 100.00% | 100.00% | 100.00% | 99.80% | 99.91% | | Block 85M | 96.20% | 97.60% | 96.20% | 96.60% | 98.40% | 98.00% | 97.20% | 98.80% | 99.00% | 99.40% | 96.20% | 97.60% | | Block 300M | 90.20% | 99.40% | 99.60% | 99.20% | 98.60% | 99.60% | 99.60% | 99.80% | 99.80% | 99.20% | 99.20% | 98.56% | | Block 800M | 95.20% | 99.40% | 98.80% | 98.80% | 98.80% | 98.80% | 99.00% | 99.40% | 99.20% | 97.40% | 99.60% | 98.58% | | Block 1.2B | 92.60% | 98.40% | 99.40% | 98.80% | 99.60% | 99.60% | 
98.80% | 99.80% | 99.80% | 99.20% | 98.00% | 98.55% | . These results confirm that the Block Transformer, like the vanilla models, can effectively retrieve global information contained within the 2K context length. With the Gemini prompt, we observed an accuracy trend that was very similar to the perplexity trend of the vanilla vs. block models. Near-perfect performance with the Verbatim prompt supports the long-sequence modeling capabilities of our models even when context information is squeezed into a single embedding. We believe this parity between Vanilla and Block Transformers at 2K context length will extend to 8K and beyond. . We would appreciate it if you could reflect our additional results on FlashDecoding (modern implementation) and NIAH evaluation (modern evaluation) in your final score, as we believe these have adequately addressed your concerns. . [1] Gemini Team, Google. “Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.” --- Rebuttal 3: Comment: I'm pleased to see that Block Transformer can effectively retrieve global information contained within the 2K context length; that is a strong indicator of long-context capability. I will increase my score to 6.
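The needle-in-a-haystack protocol described in the rebuttal above can be sketched as follows (the filler haystack, helper names, and seed are illustrative stand-ins; the actual evaluation samples 2K-token snippets from Paul Graham essays and generates 20 tokens with the model under test):

```python
import random

# Sketch of the NIAH construction and substring-match scoring described
# in the rebuttal (toy haystack; not the authors' exact harness).
def make_example(depth_pct, city="Tokyo", seed=0):
    rng = random.Random(seed)
    number = f"{rng.randrange(10**6, 10**7)}"             # random 7-digit number
    needle = f"The special magic {city} number is: {number}."
    haystack = ["Filler sentence about startups."] * 100   # stand-in snippets
    pos = int(len(haystack) * depth_pct / 100)             # depth = relative location
    haystack.insert(pos, needle)
    context = " ".join(haystack)
    # "Verbatim" prompt format from the rebuttal:
    prompt = f"<context>\n{context}\n</context>\n\nThe special magic {city} number is:"
    return prompt, number

def is_correct(generated, number):
    # A prediction counts as correct if the generated continuation
    # contains the 7-digit number anywhere.
    return number in generated

prompt, number = make_example(depth_pct=50)
assert number in prompt                      # needle was inserted into the context
assert is_correct(f" {number}\n", number)
assert not is_correct("I don't know.", number)
```

Accuracy at each depth is then the fraction of such examples where the model's 20-token continuation passes `is_correct`, which is what the tables above report.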
Summary: The authors introduced the Block Transformer architecture to address the self-attention bottleneck. This is achieved by grouping input tokens into fixed-size blocks and applying self-attention at a coarser level throughout the model. At the output layer, a token decoder predicts individual tokens from the block. The authors found that this hierarchical global-to-local modeling approach results in 10 to 20 times faster inference compared to vanilla transformers with similar perplexity. Strengths: - The topic of improving the efficiency of LLMs and making them more affordable is crucial and timely. Furthermore, as sequences scale, the self-attention bottleneck grows. - The authors conducted extensive experiments and ablations with a modern setup. The evaluation includes perplexities and zero-shot downstream tasks. The models were trained on a significant number of tokens (300B, which is more than an epoch on the Pile), making the results more trustworthy. - The paper is well-written and easy to follow. Weaknesses: - The concept appears similar to that of the Funnel Transformer [1], with the main exception that the aggregation happens only once at the token level. - The Pareto front of the decoding throughput is only improved with batch sizes greater than 32, which may not often be the case. - The higher inference throughput comes at the cost of more training compute and memory. The proposed methods perform worse than the vanilla model at an equivalent size. [1] Dai, Zihang, et al. "Funnel-transformer: Filtering out sequential redundancy for efficient language processing." Advances in neural information processing systems 33 (2020): 4271-4282. Technical Quality: 3 Clarity: 3 Questions for Authors: - Is the baseline using FlashAttention2 [2] and MQA/GQA [3]? These two modifications have become standard and significantly reduce the attention bottleneck. - Could you explain the differences between your methods and the Funnel Transformer? 
- Given the Funnel Transformer and the many other sparse attention methods, do you maintain your claim that you "are the first to recognize the central role and inference-time benefits of both global and local modeling in autoregressive transformers, particularly the significance of local modules"? [2] Dao, Tri. "Flashattention-2: Faster attention with better parallelism and work partitioning." arXiv preprint arXiv:2307.08691 (2023). [3] Ainslie, Joshua, et al. "Gqa: Training generalized multi-query transformer models from multi-head checkpoints." arXiv preprint arXiv:2305.13245 (2023). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
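As context for the reviewer's MQA/GQA question, a back-of-the-envelope sketch of how grouped-query attention shrinks the KV cache by sharing KV heads, versus shortening the cached sequence itself as local attention does (all dimensions are illustrative assumptions, not the paper's configurations):

```python
# Standard KV cache sizing formula: K and V tensors per layer, one entry
# per KV head, head dimension, and cached position, in fp16 (2 bytes).
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per=2):
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per

# Hypothetical 24-layer model with 16 attention heads of dim 128, 2K context.
mha   = kv_cache_bytes(n_layers=24, n_kv_heads=16, head_dim=128, seq_len=2048)
gqa   = kv_cache_bytes(n_layers=24, n_kv_heads=4,  head_dim=128, seq_len=2048)
local = kv_cache_bytes(n_layers=24, n_kv_heads=16, head_dim=128, seq_len=4)

print(mha // 2**20, gqa // 2**20)  # 384 96 (MiB): sharing 16 -> 4 heads gives 4x
print(mha // local)                # 512: caching 4 positions instead of 2048
```

The two levers are orthogonal, which is why the rebuttal argues a block model would still benefit from additionally applying MQA/GQA to its remaining cache.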
Rebuttal 1: Rebuttal: We thank you for the comments, and we are encouraged that you pointed out the trustworthiness of our extensive experiments with a modern setup, and the easy-to-follow writing. We address the weaknesses below. **W1. Difference between Block Transformer and Funnel Transformer** **Major difference in the key aspect—local attention**: we respectfully disagree that our Block Transformer is similar to existing pooling transformers, including Funnel Transformer [1], Hourglass [2], and CANINE [3]. Our approach is fundamentally different, as it applies **local attention** in the token decoder instead of maintaining global attention with pooling and up-sampling. Pooling transformers achieve speedup by pooling as the depth increases and then upsampling in the upper layers. However, they maintain global attention throughout all layers. In contrast, while global-to-local modeling applies a pooling operation at the token level, a primary feature is the locality of attention in the token decoder (upper layers). The token decoder applies attention only within the local window, leading to significantly higher compute utilization. We will include this discussion in our related works section. Refer to the advantages of this local attention (and more) in our visual recap in `Figure 1 of the attached PDF`, and details in `Section 2.4` and `Appendix D.2`. We also empirically pinpoint the benefits of the locality aspect (`Table 1 of the attached PDF`), accounting for a `~90% reduction` in walltime at the upper layers. . **W2. Pareto frontier of decoding throughput is only improved with batch sizes greater than 32** We respectfully disagree, and would like to emphasize that the trends in `Figures 7, 8` show that the throughput gains increase as the model size increases, even at smaller batch sizes. In fact, **Block 1.2B already surpasses Vanilla 300M** in both prefill-heavy and decode-heavy throughput **at batch size 1**. 
This is because the KV cache saturates GPU memory even at small batch sizes with larger models. Below, we show the throughput of models above 1B parameters at batch sizes 1 and 2. The advantage of Block 1.2B over the loss-equivalent Vanilla 300M model widens as batch size increases. We expect that Block 6.4B will perform between Vanilla 1.2B and 2.5B, and find that Block 6.4B is faster than Vanilla 1.2B. The gap also widens as batch size increases. | Batch Size 1 | Vanilla 85M | Block 300M | Vanilla 300M | Block 1.2B | Vanilla 1.2B | Vanilla 2.5B | Block 6.4B | | --- | --- | --- | --- | --- | --- | --- | --- | | Prefill Heavy | 216.26 | 158.34 | 115.24 | 154.37 | 113.05 | 79.63 | 110.64 | | Decode Heavy | 222.22 | 155.38 | 113.23 | 153.44 | 114.35 | 80.54 | 110.46 | | Batch Size 2 | Vanilla 85M | Block 300M | Vanilla 300M | Block 1.2B | Vanilla 1.2B | Vanilla 2.5B | Block 6.4B | | --- | --- | --- | --- | --- | --- | --- | --- | | Prefill Heavy | 422.18 | 316.31 | 224.51 | 290.84 | 203.49 | 152.86 | 211.27 | | Decode Heavy | 426.24 | 306.22 | 221.36 | 290.72 | 213.88 | 155.64 | 221.84 | . **W3. Block Transformer requires larger model size, and costs more training compute and memory** To mitigate model development costs (including training), it is possible to **uptrain existing vanilla models** with just `~10%` of training steps. We also find that training costs can be **cheaper under a constrained inference budget.** - Uptraining existing vanilla models into Block Transformers approaches the performance of models pre-trained from scratch, using just 10% of training steps (`Section 3.7`, `Figure 5a`, `Appendix M`). - IsoFLOP analysis in `Section 3.6` shows that Block Transformer can achieve **better performance** and **significantly higher throughput** using the **same training FLOPs** as vanilla models. Note that this assumes overtraining to fit a fixed budget constraint, in line with recent trends set by models such as Gemma and Llama [4] (`Section 3.6`). . 
**Q1. Do baselines use FlashAttention2 and MQA/GQA?** **Existing experiments**: all models in our paper are based on GPTNeoX using MHA on Huggingface, and we used the eager attention implementation for inference due to the lack of support for FlashDecoding at the time of submission. **FlashDecoding [5]**: New measurements using FlashAttention2’s FlashDecoding kernel show a **similar Pareto frontier as our existing results** in `Figure 3 of the attached PDF`. Please compare with `Figure 2 of the main paper`. We find that FlashAttention2 achieves `8-41%` throughput gains in vanilla models and `+8% to +31%` in block models in the prefill-heavy setting. Gains are diminished in larger models. We observe `+16% to -37%` speedup across models in the decode-heavy setting, without a coherent trend. **GQA**: We expect that both Vanilla and Block Transformers will benefit from GQA in terms of the attention operations. - Q. What if the FFN becomes the deciding factor after applying GQA? - A. New measurements show that Block Transformer is significantly faster than vanilla in **both attention and FFN operations** (`Table 1 in the attached PDF`). **Q3. Maintaining our claim** We made this claim because we were the first to identify the locality in the token decoder as the source of significant inference-time benefits, in the context of global-to-local modeling (coarse global attention in lower layers and fine local attention in upper layers). We will revise the sentence to more clearly emphasize our recognition of the benefits of the local module in global-to-local modeling. . **References** [1] Dai, Zihang, et al. "Funnel-transformer: Filtering out sequential redundancy for efficient language processing." [2] Nawrot, Piotr, et al. "Hierarchical transformers are more efficient language models." [3] Clark, Jonathan, et al. “CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation.” [4] Touvron, Hugo, et al. 
"Llama: Open and efficient foundation language models.” [5] Dao, Tri, et al. "Flash-Decoding for long-context inference” --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. Overall, I am pleased with the answers and clarifications provided, and I have adjusted my score accordingly. **W1.** I appreciate the clarification on the key difference between the proposed architecture and Pooling Transformers that is the locality in the decoding layers. I agree that local attention will significantly reduce computational requirements, particularly for long sequences. **W2.** I am satisfied with the provided numbers and suggest adding the table to the appendix. Figs. 7a and 8a do not show a clear Pareto front improvement at a batch size of 1, in contrast to Figs. 7b and 8b at a batch size of 32. I recommend extending these figures to include larger models to highlight the improvement. Additionally, I suggest revising the statement Lines 746-749: "At a batch size of 1, parameter IO has a much greater impact on throughput compared to KV cache IO, resulting in slightly lower throughput for block model. However, as the model sizes increase beyond a certain point, the increased KV cache memory causes this trend to reverse." Specifying that this trend reverses between 300M and 1.2B parameters may be helpful. **W3.** I am satisfied with the response. **Q1.** Thank you for the additional experiments on FlashAttention and GQA. **Q3.** Based on your response to **Q1**, I now have a better understanding of your claim. I appreciate your commitment to clarifying the sentence. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our comments, and providing additional detailed feedback on our manuscript. We are glad that we could clarify the novelty and contribution of our work, and further show the generality of our results to practical settings and state-of-the-art implementations. 
We will include the additional analysis and clarifications in our final paper, with further results on more batch sizes and model sizes.
Summary: The paper introduces the Block Transformer architecture, which aims to improve inference speed in autoregressive language models by adopting a hierarchical global-to-local approach. The architecture separates the global context modeling into lower layers and local detailed interactions into upper layers, thus reducing the self-attention bottleneck. The authors demonstrate significant improvements in inference throughput without compromising perplexity. Strengths: 1. The experiments in this paper are extensive, covering a variety of model parameters. 2. The proposed Block Transformer demonstrates improvements in computational efficiency, which is crucial for scaling up to longer sequences. Weaknesses: **Major Weakness** As I understand it (please correct me if I'm wrong), the primary difference between Block Transformer and MEGABYTE [1] is whether the input is a token or a byte. The architecture of Block Transformer is nearly identical to that of MEGABYTE, which significantly limits the novelty and contribution of this work. [1] MEGABYTE: Modeling Million-byte Sequences with Multiscale Transformers Technical Quality: 3 Clarity: 3 Questions for Authors: None Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 1 Limitations: As discussed above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for the comments, and we are encouraged that you pointed out the improved efficiency of our Block Transformers with extensive experiments. We address the weakness below. **W1. Difference between Block Transformer and MEGABYTE.** We have discussed the main differences to previous works related to global-to-local modeling in `Appendix C.1`. **We believe our contributions are novel relative to MEGABYTE. Here’s why:** 1. **Firstly, the primary goal of our proposed architecture, which utilizes global-to-local language modeling, is clearly different.** While MEGABYTE (like most hierarchical transformer works) focuses on **efficient pretraining**, we mainly focus on **efficient inference**. MEGABYTE aimed to reduce training time by minimizing FLOPs through global-to-local modeling. They optimized architectures under a fixed FLOPs budget, leading them to favor a model with a global module six times larger than the local model. In contrast, we studied global-to-local modeling with an emphasis on inference throughput in autoregressive LMs. Specifically, we analyzed *throughput* trends based on the block length and parameter allocation ratio (refer to `Figure 15`), and our findings reveal that increasing the size of the token decoder (local model) is beneficial for improving inference speed. This interpretation is completely overlooked and under-explored in MEGABYTE, which argues that “*Many of the efficiency advantages of the MEGABYTE design could be realized with the global model alone*”, and thus uses a significantly smaller local model. This results in a remarkable speedup of up to `20x`, in contrast to their reported `1.4x` improvement. New detailed measurements in `Table 1` of the `attached PDF` empirically pinpoint the benefits of the locality aspect, accounting for the `~90% reduction` in walltime at the upper layers compared to vanilla models. 2. 
Second, as you described, Block Transformers use subword inputs, while MEGABYTE uses byte-level inputs. This enables us to employ an uptraining strategy with initialization techniques that fully leverage existing subword-level language models (refer to `Section 3.7` and `Figure 5a`). Specifically, we demonstrate that with only 10-20% of uptraining, we can almost match the performance of the original model. This is a significant contribution to future research, as it enables the conversion of high-performing LLMs into inference-specialized Block Transformers with minimal additional training cost. Additionally, further exploration of initialization methods could lead to even greater performance improvements and a reduction in the optimal parameter size of Block Transformers. We would also appreciate acknowledgement for our novel findings and insights regarding global-to-local language modeling, such as the diverse conclusions drawn from analyzing the relationship between block length and parameter allocation ratio in terms of perplexity and throughput (refer to `Section 3.3` and `Section 3.5`), extensive ablation studies on various components, especially token decoder components (refer to `Section 3.4`), IsoFLOP analysis with the inference speed budgets compared to vanilla transformers (refer to `Section 3.6`), or exploring the information contained in context embeddings, which has never been addressed in prior research (refer to `Appendix P` and `Table 5`). Consequently, we strongly believe that our findings and contributions will provide valuable insights for future research on inference-optimized global-to-local language modeling. --- Rebuttal Comment 1.1: Comment: Given that the experiments are indeed extensive, I will increase my score from 3 to 5. 
--- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our extensive experiments which support the autoregressive inference benefits of our architecture, particularly that of local modeling, which was not fully studied or exploited in previous work. Please let us know if you have any further questions or concerns regarding the novelty of our work. We are committed to ensuring our contributions are clearly communicated.
Summary: This paper introduces Block Transformer, which is a new architecture that adopts hierarchical global-to-local modeling in autoregressive transformers to mitigate the inference bottlenecks brought by applying self-attention to the global context. In detail, Block Transformer mainly includes three different components: (1) Embedder, which aggregates each block into an input block embedding; (2) Block decoder, which applies self-attention on the full sequence of blocks (rather than tokens) to model global context; (3) Token decoder, which applies self-attention on the sequence of tokens within each block to model local context and decode individual tokens. Evaluation shows that the Block Transformer architecture demonstrates significant gains in inference throughput compared to vanilla transformers with similar perplexity. Strengths: - The paper explores an important and interesting research direction. - The improvement in inference throughput achieved by Block Transformer is significant. - The paper is generally well-written. Weaknesses: - Block Transformer needs two or three times more parameters than vanilla transformers to achieve similar perplexity. - It is unclear whether, after scaling vanilla transformers up to the 7B or 13B level, Block Transformer can still achieve similar perplexity with two or three times more parameters. - More evaluation is required to demonstrate that Block Transformer can effectively leverage the full context. While the paper evaluates the perplexity of token positions within a 2K context window to show that Block Transformer can effectively leverage at least 2K tokens of context, experiments on longer contexts of no less than 8K or 16K are also important to show that Block Transformer can indeed effectively leverage global information. Technical Quality: 3 Clarity: 3 Questions for Authors: None beyond the above. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation section of this paper is rather comprehensive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for the thoughtful feedback. We are encouraged that you found our research direction interesting and the throughput improvement significant. The weaknesses of our paper are discussed as follows. . **W1. More parameters are needed to achieve similar perplexity.** KV cache IO and memory size typically impact generation speed during the decoding process. In standard transformers, larger parameter counts necessitate larger KV cache sizes, negatively affecting both speed and memory consumption. Conversely, our Block Transformer mitigates this issue by grouping tokens into blocks and performing local modeling on each block embedding (see the memory and IO comparison in `Table 1`). Therefore, despite having two or three times more parameters, the significantly reduced KV cache size enables the Block Transformer to demonstrate up to `20 times` faster generation, taking into account the total time spent on parameter loading and KV cache loading. Our model also handles much larger batch sizes, making it a more efficient solution for real-world applications. In `Table 1` of the `attached PDF`, we have summarized the actual speed improvements achieved in the attention and FFN operations of both the block and token decoder. Additionally, `Figure 1` in the `attached PDF` illustrates which model elements contribute to increased throughput despite the larger parameter count. It is also important to note that our Block Transformer is not yet a fully optimized architecture, meaning there is still room for improving performance while maintaining its speed advantage. For example, as confirmed by attention scores (refer to `Appendix O`), incorporating multiple, salient block embeddings in the token decoder could substantially enhance performance, potentially reducing the required parameter size of Block Transformer. 
Meanwhile, due to the nature of local modeling, slightly increasing the context length in local modules would have minimal impact on the actual generation time. Moreover, our token-level modeling structure allows us to uptrain from a well-pretrained checkpoint (refer to `Figure 5a` and `Appendix M`). Further exploration of initialization methods to effectively leverage pretrained models could potentially reduce the parameter size of the Block Transformer needed to achieve the same perplexity, while significantly reducing the training time as well. **W2. Scaling models up to 7B or 13B parameters.** **Performance scaling**: we acknowledge the importance of scaling studies. Unfortunately, due to the significant computational resources required for pretraining from scratch, we were unable to verify our findings at larger scales such as 7B or 13B. However, as a proof of concept, we have successfully demonstrated that our proposed modeling approach can achieve similar perplexity across six different scales up to ~1B parameters. Based on our extensive studies, we believe that scaling Block Transformers beyond 7B parameters will still yield compelling perplexity. **Inference throughput scaling**: independent of perplexity, we compared the throughput of the Vanilla and Block Transformers scaled up to 7B parameters using randomly initialized weights. We present the maximum throughput (1k tokens/sec) of models not included in `Table 2`, in prefill- and decode-heavy scenarios. We find that our **Block Transformer with 6.9B parameters is still faster than a 160M vanilla model**. Assuming that Block Transformer 6.9B performs between Vanilla 1.4B and 2.8B, we can still expect *at least* a `6x` speed-up (by comparing with Vanilla `1.4B`).

| | Vanilla 1.4B | Vanilla 2.8B | Vanilla 6.9B | Block 2.8B | Block 6.9B |
| --- | --- | --- | --- | --- | --- |
| Prefill Heavy | 0.63 | 0.35 | 0.20 | 7.15 | 4.00 |
| Decode Heavy | 1.19 | 0.66 | 0.39 | 13.61 | 7.41 |

**W3.
Lack of experiments on longer contexts like 8K or 16K.** To further support the effectiveness of our proposed block language modeling in capturing full context, we conducted experiments with an 8K context length (please refer to `Figure 1` in the `attached PDF`). However, due to limited computational resources, we pretrained only a 70M parameter vanilla model and a 170M parameter Block model. Following prior work [1,2,3] that used token position-wise perplexity on the PG19 dataset to demonstrate the utilization of global information in long contexts, we evaluated our Block Transformer in the same manner with an 8K context length (refer to `Figure 4b` of the main paper for the 2K context window). Even with an extended 8K context window, our models effectively utilized the full context, showing a decreasing loss trend as token position increased, similar to the vanilla model. Moreover, consistent with `Table 2` in the main paper, the 170M Block model outperformed the 70M vanilla model in terms of perplexity. It is also worth noting the robustness of our proposed block language modeling in leveraging full context, regardless of the block length. As shown in `Figure 4b` of the main paper, even when varying the block length from 1 to 8, the models exhibited the same loss slope with respect to token positions. This indicates that our block language modeling can capture the full context robustly, regardless of the degree of compression applied to context representations. **Reference** [1] Yu, Lili, et al. "Megabyte: Predicting million-byte sequences with multiscale transformers." [2] Xiao, Guangxuan, et al. "Efficient streaming language models with attention sinks." [3] Zhang, Zhenyu, et al. "H2o: Heavy-hitter oracle for efficient generative inference of large language models." --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response with clarifications and additional experiments.
I believe my concerns about W1 and W3 are mostly addressed; however, I still think it is critical to scale models up to at least the 7B level for a more solid evaluation. So I will keep my score the same. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our comments. We are glad that your concerns regarding parameter requirements and long context capabilities have been addressed. We acknowledge the value of 7B parameter experiments, but they were infeasible within our scope. Based on the consistent and significant improvement in throughput from 33M to 1.4B parameters, we believe our work is a solid proof-of-concept for hierarchical global-to-local modeling, demonstrating its significant real-world benefits in subword-level autoregressive inference. This can serve as the foundation for future work, including scaling studies and advanced uptraining schemes (Appendix A), and enable novel research directions which exploit the hierarchical structure, e.g., adaptive computation by dynamically allocating block lengths based on token difficulty.
Rebuttal 1: Rebuttal: We extend our gratitude to all the reviewers for providing comprehensive and thoughtful feedback on our manuscript. We appreciate your valuable insights into the strengths and areas for improvement of our work.

# Core Contributions of Our Work

- **Novelty of approach**: the Block Transformer architecture adopts global-to-local modeling to mitigate key bottlenecks in autoregressive inference which stem from attention.
- **Difference with pooling transformers** [1, 2, 3]: our approach incorporates aggregation *and* locality, whereas pooling transformers only use aggregation. The locality of our token decoder is crucial, accounting for a `90%` walltime reduction in upper layers, compared to the global attention used in pooling transformers.
- **Difference with existing byte-level global-to-local models** [4, 5]: contrary to these works, we find that it is optimal for (a) throughput *and* (b) performance to allocate significant capacity to the token decoder (half of the layers), whereas prior work regards the local model as just "small" and analogous to a classifier/de-embedder. Our approach achieves `1.5x` throughput compared to MEGABYTE [4], reproduced as a subword-level LM.
- **Model efficiency**: our Block Transformers achieve `10-20x` gains in throughput compared to comparable vanilla models on identical hardware. While this comes at the cost of using more parameters, total memory usage is lower than vanilla models, owing to a `3-6x` hardware-agnostic reduction in KV cache per sample.
- **Novel architectural contributions**: we propose and compare 3 variants of the embedder and token decoder architecture components. Notably, we design our prefix-based token decoder to exploit the high compute utilization of the token decoder. Loss is reduced by `>0.05` with minimal overhead compared to the local model used in previous work [4].
- **Long-context modeling abilities**: despite limiting global attention to the bottom half of layers, our model is able to utilize contexts up to 2K tokens. We extend this to 8K tokens in `Figure 2 of the attached PDF` and find that our model outperforms vanilla at all token positions. This further supports the viability of a large local component (token decoder).
- **Scalability**: we find that our results scale to models with up to 1.4B parameters. We compare with baseline vanilla models that achieve equivalent loss and performance on various zero-shot tasks.
- **Uptraining**: we can uptrain vanilla transformers into Block Transformers, using `10%` of the pretraining steps to approach the performance of training from scratch. This is because we utilize standard transformer components from subword transformers, in contrast to [4, 5]. This significantly lowers the development cost and burden of adopting our architecture.

# **Summary of Strengths Cited by Reviewers**

- **Impact**: we appreciate reviewers `Ht4c`, `unsu`, `769s` for noting the importance of our research direction, solving the crucial and timely challenge of self-attention bottlenecks in long sequence modeling.
- We thank reviewer `BaSr` for acknowledging the *bold innovation* of trading parameters for throughput, which enabled our significant throughput gains of `10-20x` under the same hardware.
- **Efficiency**: we thank all reviewers for noting our significant improvements in inference efficiency.
- **Experiments**: reviewers `unsu`, `769s`, `BaSr` acknowledged that our experiments are extensive and solid, noting our modern setup (`769s`) and our architecture analysis and ablations (`BaSr`).
- **Writing**: reviewers `Ht4c`, `769s`, `BaSr` acknowledged that our paper is well-organized and easy to follow.

# Additional Material in PDF

- **Figure 1**: **visual recap of the advantages** of coarse global and fine local processing exploited by the Block Transformer.
- **Table 1**: detailed measurements of walltime reductions at each inference stage and model component, corresponding to the advantages from `Figure 1`.
- **Figure 2**: **Block Transformers can leverage 8K context length**, outperforming their vanilla counterpart at all token positions, while achieving `7-8x` throughput (`Table 2` in the main paper).
- **Figure 3**: Pareto analysis of throughput versus language modeling performance using **optimized FlashDecoding [6] kernels. We observe trends identical to our main results** (`Table 2` in the main paper), further supporting the generality of our results.

### References
[1] Dai, Zihang, et al. "Funnel-transformer: Filtering out sequential redundancy for efficient language processing."
[2] Nawrot, Piotr, et al. "Hierarchical transformers are more efficient language models."
[3] Clark, Jonathan, et al. "CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation."
[4] Yu, Lili, et al. "Megabyte: Predicting million-byte sequences with multiscale transformers."
[5] Mujika, Asier. "Hierarchical attention encoder decoder."
[6] Dao, Tri, et al. "Flash-Decoding for long-context inference."
Pdf: /pdf/dd94fa488c09c222a7e3b202072a237cd8d61da7.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Using Time-Aware Graph Neural Networks to Predict Temporal Centralities in Dynamic Graphs
Accept (poster)
Summary: In this paper, the authors utilize De Bruijn Graph Neural Networks (DBGNN) to predict centrality in a dynamic graph, thereby addressing the heavy computational workload of computing time-respecting paths between pairs of nodes. After defining the problem, the authors use DBGNN to predict dynamic graph centralities and compare against a GCN that treats the graph as a static one. Strengths: 1) Finding the temporal centrality of a dynamic graph is an interesting question, which currently suffers from a high computational burden; 2) The topic is generally interesting and may capture a large readership at NeurIPS. 3) The paper is well-written and easy to follow. Weaknesses: 1) In this paper, the authors introduce the problem of temporal centralities and apply an existing GNN model [Qarkaxhija, 2022] to this task. Therefore, my main concern lies in the contribution of this paper. The authors do not give a theoretical analysis of why the use of DBGNN brings this improvement. 2) The backbone model used in this paper, DBGNN, provides a causality-aware view to unfold embedding learning on dynamic graphs. However, in this paper the authors did not continue this idea in centrality prediction, nor give another explanation, which makes it hard to understand why this choice is significant. 3) Table 3 is not convincing enough. The authors should compare their computation cost with baselines, including both static GNN and dynamic GNN-based approaches. 4) More detailed analysis of using DBGNN should be explored, such as an ablation study, parameter analysis, etc. Technical Quality: 2 Clarity: 3 Questions for Authors: 1) According to Tables 1 and 2, the DBGNN performs better in predicting closeness than betweenness (in Table 2, the DBGNN achieves the optimal Spearman's r on all datasets and hits@10 on 12/13 datasets, while it only achieves 8/13 Spearman's r and 7/13 hits@10 in Table 1).
Can the authors provide a more detailed analysis of this difference? 2) Can this temporal centrality prediction also be established by other dynamic GNNs, such as EvolveGCN [Pareja, 2020], ROLAND [You, 2022], or WinGNN [Zhu, 2023]? References: [Qarkaxhija, 2022] De Bruijn goes Neural: Causality-Aware Graph Neural Networks for Time Series Data on Dynamic Graphs [Pareja, 2020] EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs [You, 2022] ROLAND: Graph Learning Framework for Dynamic Graphs [Zhu, 2023] WinGNN: Dynamic Graph Neural Networks with Random Gradient Aggregation Window Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: 1) The theoretical analysis of this study is somewhat weak; 2) The authors did not show the superiority of this method compared to other state-of-the-art dynamic GNN models. 3) The result analysis did not give useful insight. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and the positive aspects highlighted in your report. We appreciate the time you invested to assess our manuscript! For a **clarification of our contribution w.r.t. DBGNN and the use of additional baseline models**, we kindly refer to our aggregate response. We note that the PDF attached to the aggregate response includes results on ONBRA as an additional baseline model, which we suggest to add to the camera-ready version. Considering the **rationale for using DBGNN as a backbone model and a theoretical explanation for the observed improvements**: Referring to the general discussion in our aggregate response, we attribute the observed improvements to the specific nature of the temporal graph learning problem that we address. As explained in our work, temporal betweenness and closeness centrality are fundamentally based on (shortest) time-respecting paths in a temporal graph. It is thus intuitive that the prediction of these temporal centralities requires temporal graph learning methods that - different from window-based temporal GNNs - capture patterns in the temporal ordering of interactions. This is the rationale for the choice of DBGNN as a basic learning architecture, which we adapt to apply it to our problem. From a different, theoretical perspective, higher-order De Bruijn graphs - which constitute the layers of our architecture - can be viewed as higher-order Markov chain models for the sequences of nodes traversed by time-respecting paths. This enables them to model sequential patterns that influence the topology of time-respecting paths and thus path-based temporal centralities. Considering the requested **comparison of the speed-ups in Table 3 to other baseline methods**: Thank you for this suggestion. In the PDF attached to the aggregate response, we have added a table that gives the speed-ups for all baseline models.
Being much simpler models, static GNN models provide a higher speed-up (but very bad predictions). Similarly, EVO provides higher speed-ups but worse predictions. We finally note that the temporal GNN TGN yields lower speed-ups compared to our method, despite giving worse predictions. We believe that these results provide additional insights and we suggest to add them to the appendix of the camera-ready version. Regarding the comment about **ablation studies and parameter analysis**: We first want to emphasize that for the key hyperparameter of our approach, the maximum order of the De Bruijn graph model, we do not rely on hyperparameter optimization, but we can instead use a statistical model selection to learn the optimal parameter directly from the data (cf. explanation in line 200 of our manuscript). For the ablation study, we highlight that a DBGNN model without second-order De Bruijn Graph layer is actually equivalent to GCN, i.e. the GCN results can actually be seen as an ablation study that takes away the layer that captures patterns of time-respecting paths of length two. For other aspects of the DBGNN architecture with higher-order layers (e.g. the use of a bipartite mapping layer that merges first- and higher-order node representations), the respective ablation studies have already been performed in Ref. [40], which is why we did not deem it necessary to repeat those experiments here. Regarding **question 1 on the difference in performance for temporal betweenness and closeness centrality**: Indeed, we observe that all methods generally perform better for temporal closeness centrality compared to temporal betweenness centrality. We attribute this to the specific characteristics of those centralities, which are rooted in their definitions. The temporal closeness centrality of a node only depends on the *length* of shortest time-respecting paths from that node to all other nodes. 
Moreover, temporal closeness centralities likely exhibit strong correlations between neighboring nodes, which specifically favors a prediction based on neural message passing. In contrast, the temporal betweenness centrality of a node is not only influenced by the length of time-respecting paths but also by the specific sequence of traversed nodes. At the same time, depending on the structure of time-respecting paths, two neighboring nodes can have vastly different temporal betweenness centralities. These factors suggest that the prediction of temporal betweenness centrality is a fundamentally more difficult problem, which is corroborated by our experimental results across all 13 data sets. We will be happy to add this discussion to the camera-ready version of our manuscript. Regarding **question 2 on other temporal GNNs**, we kindly refer to the discussion of additional baselines in our aggregate response to all reviewers. We again thank the reviewer for the time invested in our manuscript, and for highlighting the strengths of our work. We hope that our comments above clarify the remaining questions. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, I have read them and I choose to raise my score.
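The discussion above rests on time-respecting paths. As a minimal sketch of one common variant of temporal closeness, based on earliest-arrival time-respecting paths computed in a single time-ordered pass (the exact definition and waiting-time constraints used in the paper may differ):

```python
# Sketch of temporal closeness via earliest-arrival time-respecting paths
# (one common variant; the paper's definition may differ in details).
from collections import defaultdict

def earliest_arrival(edges, source):
    """edges: list of (u, v, t). An edge is usable if we are already at u
    no later than time t; traversing it takes one time step."""
    arrival = defaultdict(lambda: float("inf"))
    arrival[source] = 0
    for u, v, t in sorted(edges, key=lambda e: e[2]):  # single time-ordered pass
        if arrival[u] <= t and t + 1 < arrival[v]:
            arrival[v] = t + 1
    return arrival

def temporal_closeness(edges, nodes, source):
    # sum of inverse earliest-arrival times to all reachable nodes
    arr = earliest_arrival(edges, source)
    return sum(1.0 / arr[v] for v in nodes if v != source and arr[v] < float("inf"))

edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "b", 3)]
closeness_a = temporal_closeness(edges, {"a", "b", "c"}, "a")
print(closeness_a)  # 1/2 (via b at time 2) + 1/3 (via b->c at time 3)
```

Note how the direct edge (a, c, 5) is ignored because c is reached earlier through b; only the temporal ordering of edges determines the result, which is exactly the information window-based models discard.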
Summary: In the article, the authors explore the potential of using TGNN to predict node centrality metrics, such as temporal closeness and temporal betweenness. Despite the straightforward and intuitive approach presented, the issue of predicting temporal node centralities has not been previously addressed within the TGNN community, to the best of my knowledge. The authors tested the method on 13 real-world temporal networks. However, the network sizes are relatively small, with the largest network containing 327 nodes and 188,508 temporal edges, especially compared to several networks in the Temporal Graph Benchmark (TGB), such as tgb-reddit. While this may not pose a significant issue, it should be well-motivated and clearly justified in the article (see below for further details). Strengths: The proposed task is novel and has the potential to open new research directions. The article is well-written and easy to follow. The detailed experimental setup ensures reproducibility. Weaknesses: The authors should test additional models to gain a clearer understanding of how TGNN performs on this task. Additionally, it would be beneficial to include a baseline that approximates the centralities, such as those referenced in [41, 2, 21]. [2] D. A. Bader, S. Kintali, K. Madduri, and M. Mihail. Approximating betweenness centrality. In Algorithms and Models for the Web-Graph: 5th International Workshop, WAW 2007, San Diego, CA, USA, December 11-12, 2007. Proceedings 5, pages 124–137. Springer, 2007. [21] M. Haghir Chehreghani. An efficient algorithm for approximate betweenness centrality computation. In Proceedings of the 22nd ACM international conference on Information & Knowledge Management, pages 1489–1492, 2013. [41] M. Riondato and E. M. Kornaropoulos. Fast approximation of betweenness centrality through sampling. In Proceedings of the 7th ACM international conference on Web search and data mining, pages 413–422, 2014. 
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Is there a specific reason the authors chose relatively small temporal networks? 2. It is unclear whether the authors used input features in their analysis. If they did, what was the rationale behind this choice? Since closeness and betweenness do not depend on features, it would be interesting to see how the inclusion/deletion of features affects performance. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors clearly explain the potential limitations of their proposed work and suggest future research directions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed assessment of our work. While we have addressed some aspects in our aggregate response to all reviewers, in the following we answer your specific questions: Thank you for your suggestion to compare our method against estimation techniques. We have performed additional experiments comparing our method against ONBRA (Ref. [46] in our work). You can find the results in the attached PDF and a more detailed explanation in our aggregate response to all reviewers. For the **first question on why we mainly used temporal graphs with a relatively small number of nodes**: Our choice of data is based on the need to have a sufficiently large number of observed interactions compared to the number of nodes and edges. This is necessary to have a sufficient number of time-respecting paths to give non-trivial temporal centralities. Most publicly available large data sets on temporal graphs have large numbers of nodes and edges, but are very sparse in terms of observed interactions and thus have a very small number of time-respecting paths. For the **second question about the use of node features** we kindly refer to the aggregate response. In short: we agree with your assessment that betweenness and closeness do not depend on features, so we did not include them. Also, node features (like gender or school classes) were only available for three of the 13 data sets. We thank you for the positive assessment of our work and would be happy to present it at NeurIPS 2024! --- Rebuttal Comment 1.1: Comment: Thank you for your response. I’ve read the motivation and reviewed the new results. I have updated my score.
Summary: This paper proposes an algorithm using time-aware graph neural networks (DBGNN) to predict temporal path-based centralities in dynamic graphs. Experimental results show that this method outperforms a static Graph Convolutional Network (GCN) and two state-of-the-art time-aware graph learning techniques in predicting temporal centralities. Strengths: 1. Innovative Approach: The paper introduces a novel application of De Bruijn Graph Neural Networks (DBGNN) for predicting temporal path-based centralities, addressing a significant computational challenge in dynamic graphs. 2. Comprehensive Evaluation: The method is experimentally validated on 13 temporal graphs from biological and social systems, demonstrating its effectiveness and robustness across different datasets. 3. Performance Improvement: The proposed DBGNN architecture shows substantial improvements in predicting both betweenness and closeness centrality compared to traditional static GCN and other state-of-the-art time-aware graph learning techniques. Weaknesses: 1. Clarity and Readability: In several instances, the mathematical formulas, particularly the update rules and algorithms, are not directly provided in the manuscript; instead, the reader is referred to external literature for these crucial details. This can significantly hinder readers' understanding and evaluation of your work. 2. Generalization: The method is evaluated on specific types of temporal graphs (biological and social systems), and its performance on other types of temporal networks remains untested. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Scalability: How does the DBGNN model scale with increasing size and complexity of the temporal graphs? Are there any specific techniques employed to manage large-scale data? 2. Parameter Sensitivity: How sensitive is the model to hyperparameters such as the order of the De Bruijn graph? What guidelines are provided for selecting these parameters? 3.
Real-World Application: Can the authors provide more detailed case studies or examples of how this method can be applied in real-world scenarios, beyond the provided experimental evaluation? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: 1. Inadequate optimization of hyperparameters. 2. Lack of utilization of additional node features. 3. Unexplored impact of training data size on model accuracy. 4. Unverified adaptability of the model to a fully inductive setting. 5. Uninvestigated model adjustment strategies for datasets with non-stationary temporal patterns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful review. In the following, we answer your questions (in addition to our aggregate response). Considering the **first question about scalability**: The computational complexity of our model is linear in the number of time-respecting paths of length two in the temporal graph. This number can be bounded above by the number of paths of length two in the (static) graph, which can be theoretically bounded by $n \cdot \lambda_{1}^2$, where $n$ is the number of nodes and $\lambda_1$ is the largest eigenvalue of the adjacency matrix of the (undirected) static graph. Thank you for your question on specific techniques employed to handle large-scale data. Our method is able to handle large temporal graphs with millions of observed time-stamped edges. Indeed, we spent considerable effort to implement an efficient GPU-based algorithm to extract time-respecting paths with a given maximum waiting time in large temporal graphs. Due to space constraints, we did not comment on these technical aspects of our implementation, which go beyond the implementation provided by DBGNN. Our code has been integrated into a pytorch-geometric based Open Source library. We will be happy to comment on this in the appendix of the camera-ready version. Regarding **question 2 on parameter sensitivity**: As mentioned by the reviewer, the maximum order of the De Bruijn graph is a key hyperparameter of our approach. Interestingly, we do not need to rely on hyperparameter tuning to choose this parameter, as the optimal maximum order can be directly inferred from the data using a statistical model selection technique (see explanation in line 200 of our manuscript). We consider this a particular strength of our innovative approach. For **question 3 about potential real-world applications**, we argue that our method can be applied in essentially all scenarios where temporal centralities are required.
Real-world applications mentioned in the literature include the identification of important nodes in transportation networks based on passenger flow data, or the identification of influential individuals in epidemic spreading or information propagation. For an overview, we refer to ref. [22] of our manuscript. For the comment regarding the **lack of utilization of node features**, we kindly refer to our aggregate response. For the comment on the **inadequate optimization of hyperparameters** we refer to our answer to question 2. For the comments on the **unexplored impact of training data size, unverified adaptability to a fully inductive setting, and uninvestigated adjustment for non-stationary patterns**, we refer to our discussion of these open issues in the future work section (l. 370 ff). We will finally be happy to make editorial adjustments to further improve the clarity and readability (cf. our aggregate response).
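The eigenvalue bound quoted in the scalability answer can be checked numerically on a toy graph. The adjacency matrix below is arbitrary; for a symmetric matrix, the entries of $A^2$ count walks of length two, whose total $\sum_i d_i^2$ is bounded by $n \cdot \lambda_1^2$:

```python
# Numerical check of the bound: total walks of length two <= n * lambda_1^2
# for a symmetric adjacency matrix A (toy graph, arbitrary choice).
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
n = A.shape[0]
# Entry (i, j) of A @ A counts walks of length two from i to j,
# so the total equals the sum of squared degrees.
walks_len2 = int((A @ A).sum())
lam1 = float(np.linalg.eigvalsh(A)[-1])  # largest eigenvalue (sorted ascending)
assert walks_len2 <= n * lam1**2
print(walks_len2, n * lam1**2)
```

The bound follows from the Rayleigh quotient: $\lambda_1^2 = \lambda_1(A^2) \ge \mathbf{1}^\top A^2 \mathbf{1} / n = \sum_i d_i^2 / n$.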
Summary: This work aims to predict temporal node centralities, such as temporal betweenness and closeness centralities, using a time-aware graph neural network based on higher-order De Bruijn graph models. The empirical experiments on 13 datasets show that the proposed DBGNN model can predict temporal centralities effectively, with a computational speed-up compared to the exact computation, and that it outperforms baselines. Additionally, the study finds that static embeddings from the last message-passing layer of the architecture better capture temporal centralities than GCN. Strengths: **novel and important problem**: predicting node centrality in the future is an important and novel problem. Outside of the betweenness and closeness centralities discussed in the paper, this can be generalized to predicting a property of a node in the future, such as predicting user preferences. **approach with significant speed up**: the proposed DBGNN architecture achieves a significant speedup when compared to the exact computation of these centralities. This validates the usefulness of using neural models to speed up these computations. **good performance**: the proposed method outperforms alternative approaches on 13 datasets. Weaknesses: **limited baselines**: currently only a small number of baselines are included in the comparison. Including more baselines from recent literature could further validate the approach. Some example baselines are NAT [1] and DyGformer [2]. **DBGNN is an existing architecture**: the DBGNN architecture has been proposed in previous work (as mentioned in the paper as well). Thus the main contribution of this work comes from the application of this model in a novel problem setting. [1] Luo Y, Li P. Neighborhood-aware scalable temporal network representation learning. In Learning on Graphs Conference 2022 Dec 21 (pp. 1-1). PMLR. [2] Yu L, Sun L, Du B, Lv W. Towards better dynamic graph learning: New architecture and unified library.
Advances in Neural Information Processing Systems. 2023 Dec 15;36:67686-700. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why not directly use the Mean Absolute Error (MAE) metric to measure the exact difference between the predicted centralities and the ground-truth ones? 2. Is it possible to pretrain a DBGNN and use it to predict for unseen networks (but similar to the training one, say from the same domain)? 3. Typo in line 58: "13 time temporal graphs". 4. Overall the presentation is clear; however, some longer sections of text could be divided up and given clearer section headings. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations of the work have been discussed extensively. No potential negative societal impact is directly related to this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We first thank reviewer nntR for the detailed and positive assessment of our work! In our aggregate response, we clarify some of the aspects mentioned in the weaknesses, such as our **contribution over DBGNN or the use of baseline methods**. Importantly, we have performed additional experiments, adding ONBRA as another baseline method (see details in the aggregate response and results in the attached PDF of the aggregate response). Here we provide additional answers to nntR's specific questions: Regarding **question 1** about the use of the MAE error metric: Thank you for this suggestion. We have added this analysis in the PDF of the aggregate response. We find that the MAE scores across several architectures are comparable. However, a key task in temporal centrality calculation is to find the most central nodes, i.e. to predict a ranking. DBGNN outperforms other methods in this task. We will add these results to the appendix and include an interpretation. Regarding **question 2**, whether it is possible to use pretrained models to predict for unseen networks: This is a great idea, which we have actually highlighted as a promising direction for future research in the conclusion (l. 381 ff). We will explore this in future work. Regarding **questions 3 and 4**: We will be happy to adopt reviewer nntR's editorial suggestions in the camera-ready version. --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: I thank the authors for addressing my concerns, I choose to retain my current score.
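The point that MAE and ranking quality can diverge is easy to make concrete. In the sketch below all scores are made up, and the metrics (Spearman's r without tie handling, hits@k as top-k overlap) follow the standard textbook definitions rather than the paper's evaluation code:

```python
# Toy illustration: a prediction can have a substantial MAE yet a perfect
# ranking, which is what Spearman's r and hits@k reward. All numbers are
# made up for illustration.
def ranks(x):
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(a, b):  # Pearson correlation of ranks (assumes no ties)
    ra, rb = ranks(a), ranks(b)
    m = (len(a) - 1) / 2.0  # mean of ranks 0..n-1
    cov = sum((x - m) * (y - m) for x, y in zip(ra, rb))
    var = sum((x - m) ** 2 for x in ra)  # equal for any tie-free ranking
    return cov / var

def hits_at_k(pred, true, k):  # overlap between predicted and true top-k
    top = lambda x: set(sorted(range(len(x)), key=lambda i: -x[i])[:k])
    return len(top(pred) & top(true)) / k

true = [0.9, 0.7, 0.5, 0.3, 0.1]
pred = [0.5, 0.4, 0.3, 0.2, 0.0]  # same ordering as true, shifted down
mae = sum(abs(p - t) for p, t in zip(pred, true)) / len(true)
print(mae, spearman(pred, true), hits_at_k(pred, true, 2))
```

Here the prediction is off by up to 0.4 in absolute value, yet Spearman's r is 1.0 and hits@2 is perfect, matching the rebuttal's argument that ranking metrics are the relevant ones for finding the most central nodes.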
Rebuttal 1: Rebuttal: We thank the reviewers for their positive assessment. We were delighted that reviewer nntR highlights that we address a "novel and important problem" and propose an "approach with significant speed up" and "good performance". zsX2 praises our "innovative approach" and "comprehensive evaluation" that shows "substantial improvements [...] compared to traditional static GCN and other state-of-the-art time-aware graph learning techniques". Gqof argues that our work "has the potential to open new research directions" and that our article is "well-written". LDDR agrees that our work addresses an "interesting question", that it "may capture large readership in NeurIPS", and that our article is "well-written and easy to follow". In this aggregate response we address questions raised by more than one reviewer. Reviewers **nntR, Gqof and LDDR suggested to compare our approach to additional baseline methods**: We first want to highlight that the architectures mentioned by reviewer nntR and LDDR do not suit our problem in terms of the prediction task or the type of patterns needed to predict temporal centralities. These temporal GNNs address the prediction of dynamic node properties in evolving graphs. In contrast, we need to predict a single (aggregate) temporal centrality for each node, while accounting for temporal patterns. Moreover, those methods use sliding aggregation windows to update node representations, which discards information on the temporal ordering of edges (and thus time-respecting paths) within each window. Adapting these methods to address our task is a challenge by itself, which we addressed for TGN. We see TGN as a representative example for the class of temporal GNNs that - different from DBGNN and EVO - do not explicitly consider time-respecting paths. 
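The point that window-based aggregation discards the temporal ordering of edges (and thus time-respecting paths) can be seen in a toy example: two temporal graphs with identical static edges but swapped timestamps support different time-respecting paths. A hypothetical sketch, not DBGNN code:

```python
def count_2hop_time_respecting(edges):
    """Count 2-hop time-respecting paths u -> v -> w, i.e. edge pairs
    (u, v, t1) and (v, w, t2) with strictly increasing timestamps t2 > t1."""
    count = 0
    for (u1, v1, t1) in edges:
        for (u2, v2, t2) in edges:
            if v1 == u2 and t2 > t1:
                count += 1
    return count
```

Here `a -> b` at time 1 followed by `b -> c` at time 2 forms a path, while swapping the two timestamps destroys it — even though any window that aggregates both edges sees the same static graph.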
While these specifics of our problem limit the choice of suitable deep learning baselines, we agree with reviewer Gqof that it would be beneficial to include a baseline that approximates centralities. While the works referenced by reviewer Gqof address *static* centralities, ref [46] of our manuscript recently proposed the heuristic method ONBRA, which can estimate temporal betweenness centrality with varying degrees of fidelity. To compare our method to this additional baseline, we performed additional experiments included in the attached PDF. By choosing a suitable number of samples, we adjusted the estimation fidelity of ONBRA such that the estimation algorithm took approx. the same time as our method. The results of these additional experiments are in the attached PDF and they confirm that (i) our method provides a considerably higher performance in terms of rank correlation for large data sets, and (ii) generally lower mean absolute error across all data sets. Moreover, ONBRA failed to return results for three data sets where our method shows high performance. **nntR and LDDR had questions about our contributions**: As highlighted by all reviewers, a major contribution of our work is the formulation of a novel and practically relevant temporal graph learning problem, which opens opportunities for future research and is of high relevance for the NeurIPS readership. Moreover, we show that a time-aware GNN can be adapted to address this problem, outperforming existing graph learning methods and a recently proposed algorithm to estimate temporal betweenness. Another contribution is that - using a practically relevant and new problem - our work showcases the potential of causality-aware GNNs like DBGNN. Different from window-based temporal GNNs like, e.g., TGN, such methods are sensitive to the microscopic temporal ordering of time-stamped edges, which influences time-respecting paths. 
Moreover, different from EVO, which considers time-respecting paths, DBGNN can be trained in an end-to-end fashion. Going beyond [40], which introduced DBGNN, we make two important technical contributions: First, we adapt DBGNN to address a node-level regression task, which to the best of our knowledge has not been done before. Second, compared to [40], we adopt a fundamentally different training procedure suitable for a forecasting scenario: Different from the node classification task in [40], we first learn a model in a training window. We then refit this pre-trained model to forecast temporal centralities in a future observation of a temporal graph, which most likely includes different nodes and edges. The fact that our approach can address this challenging forecasting problem makes us hopeful that it can be generalized to other temporal graph forecasting problems that require making predictions for future temporal graphs with previously unobserved nodes or edges. **zsX2 and Gqof** had questions about the **use of node features**: While our approach generally allows the use of node (or edge) features, we did not consider them - except for one-hot-encodings of nodes - in our experiments for two reasons: First, node features are only available for three of the 13 data sets used in our paper and we wanted to compare the performance on equal grounds. Second, as mentioned by Gqof, we do not expect temporal centralities to depend on node features. For responses to individual questions by reviewers, we kindly refer to our responses to the respective reviewers. In summary, our work is an important and timely contribution to the temporal graph learning community. It addresses the need for causality-aware temporal graph learning methods (which has recently been highlighted in [1]) and proposes a novel and practically relevant problem that showcases their potential. We thank the reviewers for their positive assessment and helpful suggestions.
In particular, the additional results included in the attached PDF strengthen our contribution and we propose to add them to the appendix. We will address all editorial suggestions in the camera-ready version. [1] https://towardsdatascience.com/temporal-graph-learning-in-2024-feaa9371b8e2#bfbb Pdf: /pdf/7bf1172f3926562c5288fd8aa9c6f5ef7e111b1b.pdf
NeurIPS_2024_submissions_huggingface
2024
Improving Generalization in Federated Learning with Model-Data Mutual Information Regularization: A Posterior Inference Approach
Accept (poster)
Summary: This paper proposes a Federated Learning model that supports Bayesian inference. To alleviate potential bias induced by local client data, a regularisation constraint on model-data mutual information is introduced. The authors show that the MCMC inference with the regularisation can be implemented through the stochastic gradient Langevin dynamics. The authors also prove a generalisation bound. Strengths: - Information-theoretical modelling of federated learning is not very common. More exploration in this area is important. - The authors provide justifications to most of the design decisions and they look sound to me. Weaknesses: - The writing is sometimes difficult to follow. The relevance of some theoretical analyses is not always clear. It would be better to introduce Algorithm 1 in the main text early on. Steps in line 10 and line 16 seem to be the key differences of the proposed method; they should be reflected in Figure 1. - The proposed method is computationally more complex than the point estimates. The experimental results of FedMDMI do not show significantly better performance compared to FedEP. A comparison of computational cost is required to understand the potential advantages and compromises compared with all baselines. - FedMDMI has been evaluated on small models that may not fully reflect its performance in scenarios requiring larger and more complex architectures such as ResNets. Evaluating FedMDMI on such models would provide a more comprehensive evaluation of its scalability and generalization capabilities. - The Dirichlet distribution is used to simulate class imbalance data heterogeneity. Using heterogeneous datasets provided by TensorFlow Federated and LEAF would ensure that the method is evaluated under natural heterogeneous data distributions. Technical Quality: 3 Clarity: 3 Questions for Authors: - Figure 3 is very busy. Can you elaborate on how they show the proposed method "outperforms the other algorithms"?
In particular, if the MI constraint regularises the effect of local client bias, why does the proposed method show a volatile learning curve? - Discuss the computational complexity of the proposed approach and how adjusting the batch size affects the performance. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
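The Dirichlet heterogeneity setup questioned in the weaknesses is straightforward to reproduce: for each class, client proportions are drawn from Dir(alpha) and the class's samples are split accordingly, with smaller alpha giving more skewed local label distributions. A generic simulation sketch (the `dirichlet_partition` helper is hypothetical, not the paper's code):

```python
import random
from collections import defaultdict

def dirichlet_partition(labels, n_clients, alpha, rng):
    """Split sample indices across clients; per-class client shares ~ Dirichlet(alpha).
    Smaller alpha -> more skewed (heterogeneous) local label distributions."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    clients = [[] for _ in range(n_clients)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        # Dirichlet sample via normalized Gamma draws
        gammas = [rng.gammavariate(alpha, 1.0) for _ in range(n_clients)]
        total = sum(gammas)
        start = 0
        for c in range(n_clients):
            end = len(idxs) if c == n_clients - 1 else start + int(gammas[c] / total * len(idxs))
            clients[c].extend(idxs[start:end])
            start = end
    return clients
```

Every index is assigned to exactly one client, so the partition preserves the dataset while controlling per-client class imbalance through `alpha`.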
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments; our response can be found in the following. **Comment 1-The writing is sometimes difficult to follow:** 1) Due to the space limit of the main text, we placed the theoretical proof, more detailed analysis, and Algorithm 1 in the Appendix of our submission. To improve readability, we will include these core steps of theoretical analysis and Algorithm 1 in the main text in the final version of this paper, where there is one additional page allowed. 2) Line 10 of Algorithm 1 represents the local SGLD update, which has been reflected in Figure 1 as the “SGLD for sampling” step. In the modified version, we will also reflect the global sliding average step (i.e., Line 16 of Algorithm 1) in Figure 1. **Comment 2-A Comparison of Computational Cost:** Please see our response to common Comment 1. Here, we first provide a complexity analysis concerning the computation time, storage, and communication for our method compared to other Bayesian methods. Subsequently, our experiments examine the relationship between computation time and model dimension within a single communication round. It is shown that the computation time of our algorithm is significantly shorter than that of FedEP. **Comment 3-Additional Experiments on ResNet-18:** Following this suggestion, we have evaluated the performance of our FedMDMI by using ResNet18 on the CIFAR-10 and CIFAR-100 datasets, with the following observations. Table 13 shows that our FedMDMI still outperforms other comparison methods in terms of the generalization performance, with the ResNet-18 architecture. Here, we replace the batch norm with group norm and set the number of clients to 20, such that the 10% and 5% sampling rates correspond to only two and one client participating in training per communication round, respectively. This may highlight the robustness of our FedMDMI to some possible model-architecture changes and its ability to adapt to various models.
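For reference, the local SGLD update mentioned in Comment 1 (Line 10 of Algorithm 1) has the generic form w <- w - eta * grad U(w) + sqrt(2 * eta) * N(0, I), i.e., a gradient step on the (negative log) posterior plus Gaussian noise scaled to the step size. A minimal sketch of one step (illustrative, not the authors' implementation):

```python
import math
import random

def sgld_step(w, grad_u, eta, rng):
    """One SGLD update: w <- w - eta * grad U(w) + sqrt(2 * eta) * N(0, I)."""
    return [wi - eta * gi + math.sqrt(2.0 * eta) * rng.gauss(0.0, 1.0)
            for wi, gi in zip(w, grad_u)]
```

With `eta = 0` the step degenerates to the identity; with small positive `eta`, iterating the step produces (approximate) samples from the Gibbs posterior rather than a single point estimate.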
**Comment 4-Additional datasets provided by TensorFlow Federated and LEAF:** Please note that in the original manuscript, we used LEAF to generate the heterogeneous Shakespeare dataset, with each client associated with a speaking role comprising a few lines. Following this advice, we have further used TensorFlow Federated to generate the Stack Overflow dataset. Similar to [D1], due to the limited graphics memory and rebuttal time, we utilize only a sample of 200 clients from the original dataset, with the following observations. As shown in Table 14, on the Stack Overflow dataset, our FedMDMI continues to outperform the majority of other optimization algorithms designed to address the data heterogeneity issue. This can be attributed to our proposed model-data mutual information regularization that enhances generalization. [D1] G. Cheng _et al._, "Federated asymptotics: a model to compare federated learning algorithms," in _Proc. AISTATS_, 2023. **Comment 5-Further Clarification on Fig. 3:** i) Figures 3(a) and 3(b) provide an effective visual representation of model calibration. In a well-calibrated model, the difference between confidence and accuracy should be close to zero for each bin. Our FedMDMI (black curve) shows a closer alignment to zero across most bins, indicating superior calibration. This is further supported by Table 3, where our FedMDMI exhibits the lowest expected calibration error in most cases, particularly on the CIFAR-100 dataset. ii) Figures 3(c) and 3(d) show the convergence behavior of different algorithms. Here, we would like to show that though our FedMDMI does not converge the fastest, especially compared to the traditional optimization-based algorithm SCAFFOLD, it still slightly outperforms the other Bayesian inference-based baselines. 
In fact, we have also given an explanation on this observation in Line 358 of the original manuscript: "One potential explanation is our use of stochastic gradient Langevin dynamics (SGLD) to approximate the posterior, which often suffers from a slow convergence rate due to the variance introduced by the stochastic gradient." iii) Figures 3(c) and 3(d) also show that the final convergence of our FedMDMI achieves higher accuracy. This was further demonstrated in Table 2. Since the CIFAR-10 dataset is relatively simple, the baselines can easily achieve high accuracy, making our improvement less significant. However, for the more complex datasets like CIFAR-100 and Shakespeare, our FedMDMI shows a more significant improvement. The volatile learning curve mentioned by the reviewer may be due to the fact that it was plotted averaged over 5 random seeds and drawn by sampling 25 points apart, resulting in a curve that is not completely smooth. In addition, we will show here the standard deviation of the accuracy on each convergence curve when it is close to convergence (i.e., 2000-4000 rounds), as follows. For **CIFAR10 on Dir (0.2)-L**: FedAvg (0.44%), FedPA (0.74%), FedEP (0.34%), FedBE (0.51%), FALD (1.17%), FedMDMI (0.28%). For **CIFAR100 on Dir (0.2)-L**: FedAvg (0.58%), FedPA (1.19%), FedEP (0.85%), FedBE (0.69%), FALD (0.96%), FedMDMI (0.79%). Such experimental results show that, in most cases, our FedMDMI is less volatile during the convergence stage compared to other posterior inference-based algorithms. **Comment 6-Discussion on Computational Complexity:** Please also see our response to common Comment 1. For the batch size, in the original manuscript it was set to the default value of 50 for CIFAR-10 and CIFAR-100, following the same setting as in the other baselines. We have also conducted additional experiments to demonstrate the effect of batch size on performance. The results are shown in Table 12. 
Our method demonstrates greater robustness across different batch sizes compared to other baselines. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I appreciate the author's effort in improving the paper. I will raise my score. --- Rebuttal 2: Title: Official Comment by Authors: Comment: Dear Reviewer hP14, Many thanks for the time and effort that you have dedicated to reviewing our paper and providing these insightful comments, which will further help enhance the quality of the final version of our manuscript.
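As background for the calibration discussion in Comment 5 above: the expected calibration error reported in Table 3 is conventionally computed by binning predictions by confidence and averaging the per-bin |accuracy - confidence| gap, weighted by bin size. A generic sketch (standard ECE, not the authors' code):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin-size-weighted average of |accuracy - confidence| over confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, correct):
        b = min(int(c * n_bins), n_bins - 1)  # confidence 1.0 falls in the last bin
        bins[b].append((c, y))
    n = len(confidences)
    ece = 0.0
    for members in bins:
        if not members:
            continue
        conf = sum(c for c, _ in members) / len(members)
        acc = sum(y for _, y in members) / len(members)
        ece += len(members) / n * abs(acc - conf)
    return ece
```

A well-calibrated model has a near-zero gap in every bin (the black curve in Figures 3(a)-(b)), which is exactly what drives a low ECE.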
Summary: The paper proposes an approach to mitigate training failure in the heterogeneous federated learning setup. The approach combines a Bayesian perspective of posterior inference on the client side and regularization of mutual information between weights and data in order to reduce the effect of differences in the local datasets. The authors provide computable approximations of the values involved and provide an information-theoretic bound on generalization in the federated setup. Experiments show that the method succeeds and outperforms several baselines. Strengths: The paper gives a detailed explanation of the approach for heterogeneous federated learning with mutual information regularization and posterior estimation. A generalization bound is provided and extensive experiments are performed. Weaknesses: The motivation to introduce posterior inference in the federated setup is not clear: The problem of point estimation and the inability to quantify uncertainty of the predictions is equally valid for the centralized setup as well. The further description of the approach is convoluted; it seems that the main reason to resort to posterior estimation is to be able to obtain tractable computation for the mutual information term and the corresponding generalization bound from a PAC-Bayes perspective. The method is motivated by scarce local data that might lead to overfitting and prevent training a global model when aggregated, but this setup is not checked empirically. Only the heterogeneity with respect to the label distribution is evaluated. Technical Quality: 2 Clarity: 3 Questions for Authors: 1 - In equation (20) there are two different mutual information terms, I(w;S) and I(w;D). What is the difference between them? 2 - What are the conditions under which it is possible to decompose the posterior of the global model into the product of posteriors of the local models?
3 - In the conclusion you claim to show that the optimal posterior is Gibbs posterior, but as I understand you used this result from previous research? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments; our response can be found in the following. **Comment 1-Further clarification on Our Motivation:** We agree with the reviewer that in centralized learning, posterior inference is proposed to provide a more reliable assessment of model uncertainty than point estimation. This uncertainty estimation is also crucial in the safety-critical applications of federated learning, such as autonomous driving, healthcare, and finance. Consequently, there has been some recent literature, such as FedPA and FedEP, considering the introduction of posterior inference into the federated setup. However, we found that these Bayesian inference-based federated learning algorithms do not adequately address the data heterogeneity issue. Therefore, we are motivated to propose a mutual information regularization to enforce the global posterior to learn essential information from the heterogeneous local data, thereby improving the generalization capability. To optimize this regularization, we further employ a series of techniques, including the global mutual information decomposition, PAC-Bayesian conclusions, and stochastic gradient Langevin dynamics (SGLD) sampling. **Comment 2-Additional Experiments on Scarce Local Data:** Thank you for your comments. We have conducted additional experiments to analyze the impact of the number of local data samples. Given that the total number of data samples in CIFAR-10 and CIFAR-100 is fixed, we controlled the number of samples on each client by adjusting the number of clients. With fewer data samples on a client, the local model is more prone to overfitting. The results are shown in Table 11. Our FedMDMI maintains superior performance even with scarce local data, demonstrating that our MI regularization-based posterior estimation effectively alleviates the overfitting caused by data scarcity. **Comment 3-The difference between mutual information terms, I(w; S) and I(w; D):** In Eq.
(20), $I(w; S)$ relates to the participating generalization error, i.e., the difference between the empirical and expected risk for the participating clients. This serves as the regularized term that can be estimated in our FedMDMI. On the other hand, $I(w; D)$ relates to the participation gap, i.e., the difference in the expected risk between the participating and non-participating clients. This term, however, cannot be estimated due to the unavailability of the non-participating clients. Here, we will illustrate the difference with a simple example. Consider an extreme case with 10 clients, each holding only one label of the MNIST dataset. In this scenario, we restrict federated training on the first 9 clients, which contain train dataset labeled 0-8. During the test phase, the error measured on test sets from labels 0-8 is referred to as the participating generalization error $I(w; S)$. The error measured on the data from the $10$-th client, which holds label 9, is termed as the participation gap $I(w; D)$. Thus, $I(w;D)$ and $I(w;S)$ do not represent a compromise or antagonistic relationship. By optimizing $I(w;S)$, we can improve the performance of the learned model on the distribution seen during training. Consequently, if the distribution of clients that have never participated is the same as the distribution seen by trained clients, then $I(w;D)$ degenerates to $I(w;S)$. In other words, our FedMDMI can only guarantee improved generalization on the distribution seen during training, but not on the out-of-distribution data from non-participating clients. This could motivate an interesting future direction: reducing $I(w;D)$ through techniques like transfer learning or domain generalization. 
**Comment 4-What are the conditions under which it is possible to decompose the posterior of the global model into the product of posteriors of the local models?** The decomposition of the global posterior into the product of posteriors of the local models is based on two assumptions: 1. The global likelihood is conditionally independent given $w$, i.e., $p(S_1,\dots,S_m | w) = \prod_{i=1}^m p(S_i | w)$. 2. The ratio of the global prior to the product of client priors, $\frac{p(w)}{\prod_{i=1}^m p_i(w)}$, is considered a constant based on some prior assumptions, such as Gaussian priors or uniform priors. These assumptions are widely recognized and significant across various fields, including in PoE [C1] and (EP)-MCMC [C2, C3], and Bayesian FL [C4]. [C1] Hinton, Geoffrey E., "Training products of experts by minimizing contrastive divergence," Neural computation 14(8): 1771-1800, 2002. [C2] Neiswanger, Willie, Chong Wang, and Eric P. Xing, "Asymptotically exact, embarrassingly parallel MCMC," in _Proc. UAI_, 2014. [C3] Wang, Xiangyu, and David B. Dunson, "Parallelizing MCMC via Weierstrass sampler," arXiv preprint arXiv:1312.4605, 2013. [C4] Al-Shedivat, Maruan, _et al._, "Federated learning via posterior averaging: A new perspective and practical algorithms," in _Proc. ICLR_, 2021. **Comment 5-In the conclusion, you claim to show that the optimal posterior is the Gibbs posterior, but as I understand, you used this result from previous research?** We agree with the reviewer that this is not our contribution. To avoid confusion, we emphasized in the original manuscript that "this conclusion is well-established in the field of PAC-Bayesian learning" in the Introduction (Line 64). We also stated that, "In order for our paper to be self-contained, we re-state the proof from [12, 44] here for the optimal posterior" in the Proof (Line 988). 
To avoid overclaiming our contribution, we will further emphasize that the result of the optimal posterior being the Gibbs posterior stems from previous research. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal and raise my score. --- Reply to Comment 1.1.1: Title: Official Comment by Authors: Comment: Dear Reviewer XkDq, We sincerely appreciate the time and effort that you have dedicated in reviewing our paper and providing these insightful comments, which will further help improve the quality of the final version of our manuscript.
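Under the two assumptions stated in Comment 4 above, and in the special case where each local posterior is Gaussian, the product decomposition has a simple closed form: precisions add and the mean is precision-weighted, as in a product of experts. A one-dimensional illustrative sketch (our simplification, not the paper's aggregation code):

```python
def product_of_gaussians(means, variances):
    """Combine N(mu_i, var_i) factors: precisions add, mean is precision-weighted."""
    precisions = [1.0 / v for v in variances]
    var = 1.0 / sum(precisions)
    mean = var * sum(p * m for p, m in zip(precisions, means))
    return mean, var
```

For example, two equally confident local posteriors N(0, 1) and N(2, 1) combine to N(1, 0.5): the global mean sits between the clients and the variance shrinks, reflecting the pooled evidence.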
Summary: In this paper, the authors introduce a method for federated learning to bypass problems caused by inter-client data heterogeneity. For this, they introduce a Bayesian approach with an information-theoretic regularizer that will prevent local models from overfitting. Specifically, the authors add a model-data regularizer at a global level and then show how it could be computed in a federated fashion. The local optimal posterior appears to be the Gibbs posterior, and the authors employ SGLD to sample from it. To show the efficacy of the approach, the authors conduct a series of experiments on image and text data from federated datasets. Strengths: I think the paper has the following strengths: - I find the paper very easy to follow, and the suggested idea is very interesting and natural; - It tackles an important problem of data heterogeneity in Federated Learning; - The paper provides detailed theoretical derivations and experimental evaluation; Weaknesses: I find the following things are downsides: - The loss used in the optimization is a result of several levels of approximations. First, the global model-data MI term is upper-bounded by the sum of local model-data MI terms. Second, each local model-data term is itself upper-bounded by the RHS of Eq. 17. I think that the discussion on the tightness of the upper bound is missing. - I feel that some baselines are missing. Probably the first and classical approach to combat the problem of data heterogeneity was FedProx [1] (which was cited), but it is not compared with. Also, it would be interesting to see where the method is placed if compared with personalized approaches that are specially built to deal with data heterogeneity. E.g. FedPop [2] (Bayesian), FedRep [3] (not Bayesian). [1] Li T. et al. Federated optimization in heterogeneous networks. Proceedings of Machine Learning and Systems, 2020, vol. 2, pp. 429-450. [2] Kotelevskii N. et al.
FedPop: A Bayesian approach for personalised federated learning. Advances in Neural Information Processing Systems, 2022, vol. 35, pp. 8687-8701. [3] Collins L. et al. Exploiting shared representations for personalized federated learning. International Conference on Machine Learning, PMLR, 2021, pp. 2089-2099. - Minor typos: 1) Lines 142-143 collapsed (negative vspace?); 2) Figure 2: The left y-axis lives in [0, 1], right y-axis in [0, 100]. Technical Quality: 3 Clarity: 3 Questions for Authors: - In Line 76 it is said that the MI term (...) "offers a certain client-level privacy protection as a byproduct." I think the authors imply that the MI term regularizes the fitness of a local model. I wonder, how overfitness can compromise the privacy of training data. - In lines 171-175 there is a reasoning about some bias factor $\delta$ that affects the generation of local data. What is $\delta$? Why does inequality in 174 hold? Can you elaborate more on the last sentence in lines 174-175? - In lines 212-213 authors use the term Variational Approximation (VA). I have a feeling that the term Variational Inference (VI) is a more common alternative to MCMC met in literature. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have a separate section on limitations. However, not all of them are addressed (see the Weaknesses section, about upper-bound on a loss). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments; our response can be found in the following. **Comment 1-Discussion on tightness of the upper bound:** First, we are constrained in practice to only leverage local data $S_i$ at individual client $i$ under the FL settings. Alternatively, based on the Global Model-Data MI Decomposition in Proposition 4.1, we have: $$I(w ; S) = \sum_{i=1}^m \left[ I\left(w ; S_i\right) - I\left(S_i ; S^{i-1}\right) \right] \le \sum_{i=1}^m I\left(w ; S_i\right).$$ Here, we can upper bound $I(w ; S)$ by the sum of local MI terms $I(w ; S_i)$ that can be estimated locally by these clients. By doing so, we also introduce an estimation error, i.e., the data correlation $I\left(S_i ; S^{i-1}\right)$ between the individual clients, which determines the tightness of the proposed upper bound $\sum_{i=1}^m I\left(w ; S_i\right)$. This data correlation $I\left(S_i ; S^{i-1}\right)$ between the individual clients is related to the clients' data generation and collection, which is intractable to estimate and optimize. Then, each local MI term $I(w; S_i)$ can be further expressed as: $$ I(w ; S_i) = \mathbb{E}\_{p(S_i)}[\operatorname{KL}(p(w \mid S_i) \| p_i(w))] = \mathbb{E}\_{p(S_i)}[\operatorname{KL}(p(w \mid S_i) \| r(w))]-\operatorname{KL}[p_i(w) \| r(w)] \leq \mathbb{E}\_{p(S_i)}[\operatorname{KL}(p(w \mid S_i) \| r(w))],$$ where $p_i(w) \triangleq \mathbb{E}_{p(S_i)}[p(w | S_i)]$ denotes the oracle prior and $r(w)$ is an arbitrary prior distribution that is used to approximate $p_i(w)$. This may incur another estimation error, which is the difference between the oracle prior $p_i(w)$ and the prior $r(w)$ we actually use, i.e., $\operatorname{KL}\left[p_i(w) \| r(w)\right]$. As indicated in Line 240, [B1] emphasizes that to achieve a small KL divergence, the prior must, in essence, predict the posterior.
Specifically, [B1] employs a distribution of pre-trained models derived from a portion of the untrained data as prior $r(w)$. In our work, we propose using the global model in the previous round as the mean $\mu$ of Gaussian prior and the uncertainty introduced by all clients in the previous round as the covariance $\Sigma^{-1}$ in the prior, as shown in Eq. (18) and Line 6 in Algorithm 1 of our FedMDMI. This incorporates the global data information and potentially helps predict the local priors based on the global posterior decomposition. [B1] G. Dziugaite _et al._, "On the role of data in PAC-Bayes bounds," in _Proc. ICML_, 2021. **Comment 2-Additional baselines:** Please see our response to common Comment 2. **Comment 3-Minor typos:** Thank you for pointing out the small issue with collapsed Lines 142-143, and we will fix it. Additionally, in Figure 2, the ordinate represents the percentage error. Specifically, the left figure indicates that the training error is within the range of [0, 1%], while the right figure indicates that the test error is within the range of [0, 20%]. **Comment 4-Relationship between Overfitness and Privacy of Training Data:** The MI term $I(w;S)$ regularizes the fitness of a local model. In fact, the hyperparameter $\alpha$ controls the degree of the MI regularization and embodies a trade-off between privacy and fitting (or trade-off between generalization and fitting). As highlighted in Line 275, the proposed MI regularizer $I(w;S)$ directly quantifies the extent to which the model memorizes data. A larger value of $I(w;S)$ implies that the model fits the data well, thus signifying the fitting. Conversely, a smaller $I(w;S)$ indicates that the model avoids memorizing excessive data details, thereby promoting generalization and, from another perspective, privacy protection. 
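The slack of the bound discussed in Comment 1 above is exactly $\operatorname{KL}[p_i(w) \| r(w)]$, the divergence between the oracle prior and the prior actually used. For Gaussians this KL has a closed form, which makes it easy to see how a prior centered on the previous-round global model (Eq. (18)) tightens the bound. A one-dimensional sketch for illustration (textbook formula, not the paper's code):

```python
import math

def kl_gauss(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) in nats, 1-D case."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)
```

The KL is zero when the prior matches the posterior exactly and grows quadratically in the mean mismatch, so a prior that "predicts the posterior" (as [B1] puts it) directly reduces the estimation error.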
In fact, some work [B2] demonstrates that in certain cases, knowing the previous model and the gradient update from a client can allow one to infer a training example held by that user. Therefore, for stronger privacy protection, differential privacy can be used to encrypt these models or gradient updates. Our work shows that regularizing the MI term can provide a form of differential privacy protection as a byproduct. [B2] P. Kairouz _et al._, Advances and open problems in federated learning, Foundations and Trends in Machine Learning, 2021. **Comment 5-Further Clarification on Factor $\delta$:** Here, we would like to explore and offer further insights, from the perspective of FL, on how minimizing mutual information can help mitigate bias resulting from data heterogeneity. In FL, we denote by $\delta$ the bias factor that affects the generation of local heterogeneous data (such as the difference in clients' locations, preferences, or habits). If we could effectively eliminate the influence of the bias factor $\delta$, the entire dataset would become homogeneous (similar), thereby reducing bias among local posteriors. Indeed, directly eliminating the impact of the bias factor is impractical. But leveraging the Markov chain $\delta \rightarrow S \rightarrow w$ allows us to directly infer that $I(w; \delta) \le I(w; S)$, where the inequality holds due to the **Data Processing Inequality (DPI)**. As a consequence, we can diminish $I(w; S)$ to constrain $I(w; \delta)$, thereby rendering the model $w$ insensitive to the bias factor $\delta$ from the diverse clients. Furthermore, similar analyses are prevalent in other contexts. For instance, in centralized learning, literature [B3] (Section 2.2 on Page 6) assumes that nuisances affect the observed data, and these nuisances are mitigated through information bottleneck regularization (another well-known mutual information). [B3] A.
Achille _et al._, "Emergence of invariance and disentanglement in deep representations," Journal of Machine Learning Research, 2018. **Comment 6-Comparison between MCMC and Variational Approximation (VA):** Please see our response to common Comment 3. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your detailed response. I appreciate how you addressed the concerns I raised. Therefore, I am willing to raise my score from 5 to 6. I suggest adding results from new experiments to the revised version, including the running time, to make your evaluation more complete. It would also be helpful to mention relevant literature on Bayesian Federated Learning to give readers more context. --- Reply to Comment 1.1.1: Title: Official Comment by Authors: Comment: Dear Reviewer re5T, We sincerely appreciate your time and effort in reviewing our paper and providing valuable comments. We will incorporate these additional baselines and new supplementary experiments into the revised version. Additionally, personalized Bayesian federated learning will also be included in the discussion. These insights will further help enhance the quality of the final version of our manuscript.
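The data-dependent prior construction discussed in this thread (previous-round global model as the Gaussian mean, cross-client spread as the uncertainty) can be sketched as follows. This is an illustrative sketch under simplifying assumptions (flattened parameter vectors, a diagonal covariance proxy); it is not the paper's Eq. (18).

```python
import numpy as np

def gaussian_prior_from_round(client_models):
    """Build a Gaussian prior N(mu, Sigma) for the next round.

    mu    : the aggregated (global) model from the previous round.
    Sigma : a diagonal covariance proxy taken from the spread of the
            clients' models around that mean (illustrative choice).
    """
    W = np.stack(client_models)        # shape: (num_clients, dim)
    mu = W.mean(axis=0)                # previous-round global model
    sigma2 = W.var(axis=0) + 1e-8      # per-coordinate client uncertainty
    return mu, np.diag(sigma2)

def log_prior(w, mu, Sigma):
    """Unnormalized Gaussian log-density log N(w; mu, Sigma)."""
    diff = w - mu
    return -0.5 * diff @ np.linalg.solve(Sigma, diff)
```

Such a prior lets the local posterior inference in the next round start from, and stay anchored to, the global information of the previous round.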
Summary: The paper considers the problem of Bayesian Federated Learning under data heterogeneity and class imbalance across clients, and develops a posterior inference approach for model parameters through mutual information regularization of the data and global parameters in the local posteriors. This is achieved via the KL formulation of mutual information and by showing that the optimal local posterior is a Gibbs distribution. To infer the local posterior, Stochastic Gradient Langevin Dynamics is used, and to aggregate the local posteriors into the global posterior, a simple average of the local samples is considered. The theoretical results suggest that the proposed algorithm results in the convergence of the local and global posteriors, and corresponding generalization bounds are provided. Results are shown on a few datasets where competitive test performance and uncertainty calibration are achieved. Strengths: * The problem considered is timely and important. In almost every application of federated learning, client data heterogeneity and scarcity are inevitable. For data scarcity, considering the model uncertainty (through Bayesian modeling), and for data heterogeneity, removing the biases of local and global posteriors due to biases in local datasets, are sensible directions. * Accompanying the empirical results and intuitions with theoretical arguments is another strength of the paper. * The writing is clear and understandable, and the organization of the arguments is intuitive and logical. Weaknesses: * Although competing (Bayesian) methods are not specifically designed to deal with data heterogeneity, there seems to be very little to no gain achieved by the model. The results in their current form seem weak to me. The performance gains are quite marginal (e.g. Fig. 3) both for uncertainty calibration and test performance. Are the performance differences provided in Tables 2 and 3 statistically significant?
Shouldn't we assume that as the degree of heterogeneity increases the proposed method is more effective? Why don't we see a larger gap as a function of $\alpha$? * A few important papers seem to be missing from the introduction, such as [1]. Some methods are not considered for the comparisons, such as [2,3]. * The only information about the time (and memory) complexity is a sentence somewhere close to the end of the paper. While an important motivation of the paper is to reduce the computational and memory cost, empirical timing results aren't shown. Can you include a detailed timing comparison as a function of data heterogeneity, dimension and size of the network, etc.? Intuitively, variational methods should do a much better job compared to sampling-based methods. Although the stochastic version of Langevin Dynamics is used here, they should still take much longer than variational methods to converge. [1] Cao, Longbing, et al. "Bayesian federated learning: A survey." arXiv preprint arXiv:2304.13267 (2023). [2] Chen, Hui, et al. "FedSI: Federated Subnetwork Inference for Efficient Uncertainty Quantification." arXiv preprint arXiv:2404.15657 (2024). [3] Kim, Minyoung, and Timothy Hospedales. "FedHB: Hierarchical Bayesian federated learning." arXiv preprint arXiv:2305.04979 (2023). Technical Quality: 3 Clarity: 3 Questions for Authors: Even though I went through the paper in full, I'm still not fully convinced that mutual information regularization is a good idea. The bound provided in Eq. 20 suggests that the generalization error is bounded by two mutual information terms, one of which is the target of estimation of the paper. How can we argue that there's no tradeoff between lowering the first and second terms? Why does reducing the first term result in a lower generalization error? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments; our response is as follows. **Comment 1-Further Clarification on Fig. 3:** Fig. 3 visualizes uncertainty calibration and convergence, with the quantitative results in Tables 2 and 3 highlighting the improvements of our FedMDMI. i) Figs. 3(a) and 3(b) provide an effective visual representation of model calibration. In a well-calibrated model, the difference between confidence and accuracy should be close to zero for each bin. Our FedMDMI (black curve) shows a closer alignment to zero across most bins, indicating superior calibration. This is further supported by Table 3, where our FedMDMI exhibits the lowest expected calibration error in most cases, particularly on the CIFAR-100 dataset. ii) Figs. 3(c) and 3(d) illustrate convergence behavior. Although FedMDMI converges more slowly than SCAFFOLD, it slightly outperforms the other Bayesian baselines. This is explained in Line 358 of the manuscript, which attributes the slower convergence to the variance from stochastic gradient Langevin dynamics (SGLD). iii) Figs. 3(c) and 3(d) also show that, at convergence, our FedMDMI achieves higher accuracy. This is further demonstrated in Table 2. Since the CIFAR-10 dataset is relatively simple, the baselines can easily achieve high accuracy, making our improvement less significant. However, on more complex datasets like CIFAR-100 and Shakespeare, our FedMDMI shows a more significant improvement. iv) Regarding our response to Comment 6, the computational and storage complexity of our FedMDMI is lower than that of most other Bayesian FL algorithms. **Comment 2-Performance differences in Tables 2 and 3:** Please note that the results presented in Tables 2 and 3 are averaged over five random seeds, as indicated in the captions of both tables. **Comment 3-Degree of heterogeneity and effectiveness:** Yes, we agree with the reviewer.
As the degree of heterogeneity increases, the performance of all baselines decreases, but our method's performance decreases less significantly. This indicates that our FedMDMI is more robust to increasing heterogeneity compared to the baselines. Specifically, for CIFAR-10, as the degree of heterogeneity increases from Dir(0.7)-H to Dir(0.2)-H, the performance of our FedMDMI and FedAvg decreases by 0.2% and 0.63%, respectively. Similarly, for CIFAR-100, the performance of our FedMDMI and FedAvg decreases by 1.01% and 2.03%, respectively. This trend holds in most cases, indicating that our proposed posterior inference approach based on model-data mutual information regularization can effectively alleviate the impact of data heterogeneity. **Comment 4-Discussion of $\alpha$:** The Lagrange multiplier $\alpha$ in Eq. (7) balances fitting ($L_{S_i}(w)$) and generalization ($I(w; S_i)$). Fig. 2 illustrates that as $\alpha$ increases, the gap between test errors across varying degrees of data heterogeneity decreases, indicating the efficacy of our proposed regularization in addressing data heterogeneity. Specifically, increasing $\alpha$ leads to a gradual rise in training error and an initial drop followed by an increase in test error, suggesting that lower model complexity (larger $\alpha$) may cause underfitting, while higher complexity (smaller $\alpha$) risks overfitting. Thus, a larger $\alpha$ mitigates the bias from data heterogeneity. **Comment 5-Additional Baselines:** Please see common Comment 2. **Comment 6-Complexity Analysis:** Please see common Comment 1. **Comment 7-Comparison of MCMC and VA:** Please see common Comment 3. **Comment 8-Effectiveness of MI regularization:** As discussed in the Related Work section, model-data mutual information (MDMI) regularization is a well-established concept in centralized learning, supported by both theoretical analyses and extensive experimental verification.
For instance, [A1] demonstrates that MDMI regularization outperforms L2-norm regularization and dropout via Fisher information matrix estimation. For FL, we propose a federated posterior inference method to estimate the global mutual information via the global MDMI decomposition. Our experiments indicate that MDMI regularization is equally effective in FL. [A1] Z. Wang *et al.*, "PAC-Bayes information bottleneck," in *Proc. ICLR*, 2022. **Comment 9-Difference between mutual information terms:** In Eq. (20), $I(w; S)$ relates to the participating generalization error, i.e., the difference between the empirical and expected risk for participating clients. This serves as the regularized term that can be estimated in our FedMDMI. On the other hand, $I(w; D)$ relates to the participation gap, i.e., the difference in the expected risk between participating and non-participating clients. This term, however, cannot be estimated due to the unavailability of the non-participating clients. We illustrate the difference with a simple example. Consider an extreme case with 10 clients, each holding only one label of the MNIST dataset. In this case, we restrict federated training to the first 9 clients, which hold training data labeled 0-8. During the test phase, the error measured on test sets with labels 0-8 corresponds to the participating generalization error, governed by $I(w; S)$. The error measured on data from the 10th client, which holds label 9, corresponds to the participation gap, governed by $I(w; D)$. Thus, $I(w;D)$ and $I(w;S)$ do not represent a compromise or antagonistic relationship. By optimizing $I(w;S)$, we can improve the performance of the learned model on the distribution seen during training. Consequently, if the distribution of clients that have never participated is the same as the distribution seen by the trained clients, then $I(w;D)$ degenerates to $I(w;S)$.
In other words, our FedMDMI can only guarantee improved generalization on the distribution seen during training, but not on the out-of-distribution data from non-participating clients. This could motivate an interesting future direction: reducing $I(w;D)$ through techniques like transfer learning or domain generalization. --- Rebuttal Comment 1.1: Title: Updated review Comment: Thank you for your detailed explanation of the figures and results. The new experiments and results are helpful for better understanding the empirical aspects of the contribution. Please see below for an updated review. **Comment 1-Further Clarification on Fig.3:** I understand what the plots are representing. To me, it’s surprising that the lines all lie very close together and very far from zero (the optimal line). Adding shaded error bars representing variance to all plots in Fig. 3 would be very helpful. **Comment 2-Performance differences in Table 3:** Can you report the variance around those mean performances as well? **Comment 3- Degree of heterogeneity and effectiveness:** The pattern you’re referring to is not consistent across all methods (e.g. FedEP). Since no other model is designed to handle data heterogeneity I’m very surprised that FedMDMI doesn’t outperform all other models by a large margin as the heterogeneity hyperparameter increases. Can you make a line plot of the ECE and top-1 accuracy as a function of heterogeneity to see if the line corresponding to FedMDMI diverges at some point? **Comment 1-Complexity analysis:** As I understand, the computational and memory complexity is provided for a single round of SGLD. The main question is how long it takes for each method to converge. Depending on the problem this could result in better or worse convergence than optimization-based methods [1,2]. In general, it’s believed that for real-world problems variational methods are faster in high dimensions than sampling-based methods. 
Therefore I’m surprised that the timing results in the rebuttal are in favor of FedMDMI. Do the authors have any explanation for this? **Comment 3-Comparison of MCMC and Variational Approximation (VA):** While the motivation is solid and sensible, ultimately it’s an empirical question whether variational methods or sampling-based methods perform best for FL with data heterogeneity. Is any of the compared methods based on a variational framework? [1] Ma, Yi-An, et al. "Sampling can be faster than optimization." Proceedings of the National Academy of Sciences 116.42 (2019): 20881-20885. [2] Kim, Kyurae, Yian Ma, and Jacob Gardner. "Linear Convergence of Black-Box Variational Inference: Should We Stick the Landing?." International Conference on Artificial Intelligence and Statistics. PMLR, 2024. --- Rebuttal 2: Title: Official Comment by Authors: Comment: Dear Reviewer 4nGu, Thank you for the updated review. We have provided our response to each of these concerns in the following. **Comment 1-Further Clarification on Fig. 3:** Regarding the plots in Figs. 3(a) and 3(b), as noted by the reviewer, the lines all lie very close together and very far from zero. This observation actually highlights a key challenge in the FL settings: the scarcity of data samples at each client increases the risk of local model overfitting, leading to overconfident decisions and poor uncertainty estimation in the aggregated global model. This also underscores the necessity of incorporating Bayesian inference into FL. As further illustrated in Table 3, the posterior-based approach demonstrates a significantly better performance in uncertainty estimation as compared to the point-based approach. Besides, following this suggestion, we will also include the shaded error bars representing variance in all plots of Fig. 3 in the final version of our manuscript. 
**Comment 2-Performance differences in Table 3:** Following this suggestion, we will include the variance of the ECE values in Table 3, as follows.

**Table 1: ECE (with the variance of the ECE values) under various settings.**

| Method | CIFAR-10, Dir(0.2)-L | CIFAR-10, Dir(0.7)-L | CIFAR-100, Dir(0.2)-L | CIFAR-100, Dir(0.7)-L |
|----------|-------------|-------------|-------------|-------------|
| FedAvg | 0.169 ± 0.0039 | 0.165 ± 0.0042 | 0.429 ± 0.0030 | 0.432 ± 0.0028 |
| FedM | 0.159 ± 0.0025 | 0.169 ± 0.0036 | 0.468 ± 0.0027 | 0.459 ± 0.0035 |
| MimeLite | 0.182 ± 0.0041 | 0.178 ± 0.0034 | 0.461 ± 0.0029 | 0.470 ± 0.0022 |
| SCAFFOLD | 0.192 ± 0.0018 | 0.194 ± 0.0025 | 0.472 ± 0.0016 | 0.479 ± 0.0034 |
| FedBE | 0.182 ± 0.0029 | 0.189 ± 0.0032 | 0.440 ± 0.0015 | 0.463 ± 0.0021 |
| FedPA | 0.173 ± 0.0031 | 0.176 ± 0.0020 | 0.374 ± 0.0033 | 0.371 ± 0.0025 |
| FedEP | 0.121 ± 0.0033 | **0.118 ± 0.0021** | 0.289 ± 0.0045 | 0.273 ± 0.0027 |
| FALD | 0.135 ± 0.0028 | 0.127 ± 0.0023 | 0.267 ± 0.0018 | 0.269 ± 0.0023 |
| FedMDMI | **0.115 ± 0.0019** | 0.120 ± 0.0031 | **0.261 ± 0.0023** | **0.263 ± 0.0029** |

**Comment 3-Degree of heterogeneity and effectiveness:** First, we would like to clarify that most of the Bayesian-based comparison methods, including FedBE, FedPA, and FedEP, are specifically designed to address the data heterogeneity issue in FL. We apologize for not clarifying this point earlier due to the word limit on the first-round response. Besides, following the reviewer's suggestion, we have also conducted additional experiments (iid and Dir(0.1)), observing that as the degree of heterogeneity increases, the generalization performance of all the comparison algorithms declines. However, the generalization performance of our FedMDMI decreases to a lesser extent in most cases. This indicates that our FedMDMI is more robust to increasing heterogeneity compared to the other baselines.
Since we cannot display images here, we present these results in table form for now, and we will convert them to line plots in the final version of our manuscript. We have not yet reached a conclusion on how data heterogeneity affects the ECE. It is clear that uncertainty estimation is influenced by local sample size, and smaller sample sizes tend to lead to overconfident decisions. On the other hand, data heterogeneity has a more pronounced effect on accuracy.

**Table 2: Test Accuracy (\%) vs. data heterogeneity.**

| Method | CIFAR10, iid-H | CIFAR10, Dir(0.7)-H | CIFAR10, Dir(0.2)-H | CIFAR10, Dir(0.1)-H | CIFAR100, iid-H | CIFAR100, Dir(0.7)-H | CIFAR100, Dir(0.2)-H | CIFAR100, Dir(0.1)-H |
|---------|---|---|---|---|---|---|---|---|
| FedAvg | 83.22 | 80.31 | 79.68 | 77.96 | 48.65 | 42.35 | 40.32 | 38.14 |
| FedBE | 84.06 | 82.33 | 81.25 | 79.19 | 51.68 | 46.29 | 44.82 | 42.06 |
| FedPA | 84.82 | 82.93 | 82.78 | 80.37 | 52.44 | 49.66 | 48.51 | 46.79 |
| FedEP | 84.93 | **83.79** | 83.30 | 81.23 | 52.99 | 50.02 | 49.08 | 47.32 |
| FedMDMI | **85.05** | 83.76 | **83.56** | **82.28** | **53.28** | **50.71** | **49.70** | **48.41** |

--- Rebuttal 3: Title: Official Comment by Authors: Comment: **Table 3: ECE vs. data heterogeneity.**

| Method | CIFAR10, iid-H | CIFAR10, Dir(0.7)-H | CIFAR10, Dir(0.2)-H | CIFAR10, Dir(0.1)-H | CIFAR100, iid-H | CIFAR100, Dir(0.7)-H | CIFAR100, Dir(0.2)-H | CIFAR100, Dir(0.1)-H |
|---------|---|---|---|---|---|---|---|---|
| FedAvg | 0.174 | 0.170 | 0.168 | 0.160 | 0.437 | 0.440 | 0.434 | 0.430 |
| FedBE | 0.181 | 0.184 | 0.187 | 0.169 | 0.452 | 0.467 | 0.438 | 0.442 |
| FedPA | 0.179 | 0.174 | 0.167 | 0.168 | 0.387 | 0.380 | 0.382 | 0.361 |
| FedEP | 0.137 | 0.124 | 0.130 | 0.125 | 0.281 | 0.284 | 0.298 | 0.277 |
| FedMDMI | **0.133** | **0.122** | **0.125** | **0.117** | **0.272** | **0.262** | **0.267** | **0.259** |

**Comment 1-Complexity analysis:** We did provide the complexity analysis in terms of computation and memory for a single communication round. To calculate the total time required to reach convergence, we multiply the computation time per round by the total number of communication rounds $T$ needed for convergence. In the following, we begin with a theoretical analysis of the total number of communication rounds required for convergence. **_i) Theoretical convergence analysis._** For our proposed **FedMDMI**, as shown in Lines 230-232 of our manuscript and leveraging the insights from [A3], we provide the convergence rate of our **FedMDMI** under a $\mu$-strongly convex objective as: $\mathcal{O}\left((1-\gamma \mu /8)^{KT} + 1/m + d\right)$, where $m$ is the number of clients, $K$ is the number of local updates, $d$ is the dimension of the model, and $T$ is the number of communication rounds. Thus, the convergence rate scales linearly with the model dimension $d$.
Reference [A4] shows that the convergence rate of **FedAvg** on a $\mu$-strongly convex objective is: $\mathcal{O}\left(\mu\exp\left(-\frac{\mu}{16(1+B^2)}T\right) + 1/(mKT)\right)$, where $B$ is the gradient dissimilarity. Notably, neither **FedPA** nor **FedEP** reported a convergence rate in their original papers. **_ii) Empirical convergence behavior._** From the convergence curves in our manuscript (i.e., Figs. 3(c) and 3(d)), it can be empirically observed that the number of communication rounds required for convergence of our **FedMDMI** is smaller than that of **FedPA** and **FedEP**. We now address the reviewer's concern that variational approximation (VA) methods are generally faster than sampling-based approaches in real-world problems. Our rebuttal experiments show, however, that FedEP, which uses expectation propagation to obtain a VA of the global posterior, consumes more time per communication round and requires a greater number of communication rounds than our FedMDMI. The reasons are explained in the following. **_i) Complexity comparison per communication round._** As stated in our first-round response to common Comment 1, **FedMDMI** only requires generating and adding Gaussian noise at each round, while the sequence of global models $w_t$ converges to the Gibbs posterior for sufficiently large $t$. In contrast, **FedEP** requires approximating the covariance as the inverse Hessian at each round, which introduces an additional $\mathcal{O}(d^3)$ time complexity and $\mathcal{O}(d^2)$ memory complexity. **_ii) Total communication rounds._** Convergence analysis for federated variational approximation remains an open question, with no established results at present. We therefore give an intuitive explanation of why VA may converge more slowly than MCMC in FL.
While VA may outperform sampling-based methods in terms of the convergence in centralized learning or real-world cases, the situation is different in FL. The main drawback of VA is that it typically results in biased posterior estimates for complex posterior distributions. This issue is further exacerbated in FL, where data heterogeneity across clients already contributes to biased local posteriors, making the use of VA even more challenging. Consequently, using VA in this context can result in an unstable and slow convergence of the aggregated global posterior. --- Rebuttal Comment 3.1: Title: Official Comment by Authors: Comment: [A3] Plassier _et al._, "Federated averaging langevin dynamics: Toward a unified theory and new algorithms," in _Proc. AISTATS_, 2023. [A4] Karimireddy, Sai Praneeth _et al._, "SCAFFOLD: Stochastic controlled averaging for federated learning," in _Proc. ICML_, 2020. **Comment 3-Comparison of MCMC and Variational Approximation (VA):** Within the comparison methods, FedEP does utilize the variational approximation to obtain the global posterior. However, determining whether MCMC or VA is more suitable for federated posterior inference is in general a complex problem, as their effectiveness depends on the specific context. Due to the data heterogeneity issue, regularization at the client level is often necessary to mitigate bias in the local posteriors. Our FedMDMI employs the model-data mutual information regularization alongside the MCMC for posterior inference. In contrast, FedEP uses the posterior from the previous round to regularize the current local posterior. However, in some practical federated scenarios with low client participation rates, this retained local posterior in FedEP can become very stale, thus reducing its effectiveness in addressing the data heterogeneity issue. This also contributes to the less stable and slower convergence observed for FedEP compared to our FedMDMI.
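For concreteness, the SGLD update discussed throughout this thread differs from plain SGD only by an injected Gaussian noise term, which is what makes the iterates sample a posterior rather than collapse to a point estimate. Below is a minimal sketch on a toy quadratic potential; the potential, step size, and iteration counts are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def sgld_step(w, grad, lr, rng):
    """One SGLD update: a gradient step plus sqrt(2*lr) Gaussian noise.

    Versus plain SGD, the only extra per-iteration cost is drawing
    d-dimensional Gaussian noise: O(d) time and O(d) memory.
    """
    noise = rng.normal(size=w.shape)
    return w - lr * grad(w) + np.sqrt(2.0 * lr) * noise

# Toy target: potential U(w) = ||w||^2 / 2, whose Gibbs distribution
# exp(-U(w)) is the standard Gaussian N(0, I).
grad_U = lambda w: w
rng = np.random.default_rng(0)
w = np.zeros(3)
samples = []
for t in range(5000):
    w = sgld_step(w, grad_U, lr=0.05, rng=rng)
    if t >= 1000:                 # discard burn-in iterates
        samples.append(w.copy())
samples = np.stack(samples)
```

The injected noise is also the source of the extra gradient variance that the rebuttal cites to explain FedMDMI's slightly slower convergence relative to SCAFFOLD.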
Rebuttal 1: Rebuttal: Thanks for the comments. Below is our response to the common concerns, with new tables in the attached PDF. **Comment 1-Complexity analysis:** We provide a complexity analysis w.r.t. time, storage, and communication for the different methods. We further empirically examine the relationship between computation time, data heterogeneity, and model dimension within a communication round. 1) We begin by denoting the dimension of the neural network by $d$. At each round, we adopt SGLD (Stochastic Gradient Langevin Dynamics) to estimate the local posterior. Specifically, compared to the SGD employed in **FedAvg** and **FedBE**, our **FedMDMI** entails an additional step of generating $d$-dimensional Gaussian noise, which is subsequently incorporated into each model update iteration. This incurs an additional $\mathcal{O}(d)$ time and $\mathcal{O}(d)$ memory. In contrast, at each client, **FedPA** uses dynamic programming to approximate the $d \times d$ inverse matrix of the neural network, incurring an additional $\mathcal{O}(l^2d)$ time and $\mathcal{O}(ld)$ memory, where $l$ is the number of posterior samples. Similarly, **FedEP** also requires approximating the covariance as the inverse Hessian, introducing an additional $\mathcal{O}(d^3)$ time and $\mathcal{O}(d^2)$ memory. For communication and aggregation at the server, our **FedMDMI**, along with **FedAvg** and **FedPA**, requires $\mathcal{O}(md)$ time and $\mathcal{O}(md)$ memory, where $m$ denotes the number of clients. In contrast, **FedEP** requires $\mathcal{O}(md)$ time and $\mathcal{O}(md^2)$ memory. **FedBE** not only requires $\mathcal{O}((m+1+l)d)$ time and $\mathcal{O}((m+1+l)d)$ memory, where $l$ denotes the number of global model samples, but also performs knowledge distillation at the server using unlabeled data, which is both memory- and time-intensive. 2) We also conduct experiments to evaluate the running time of each communication round for these algorithms.
Taking the CIFAR100 dataset with participation rate $\frac{p}{m}=0.1$ as an example, the average time to execute one communication round for LeNet and ResNet-18 on a GEFORCE GTX 1080 Ti is: **LeNet**: FedAvg: 5.29 s; FedPA: 6.94 s; FedEP: 11.25 s; FedBE: 17.36 s; FedMDMI: 5.36 s. **ResNet-18**: FedAvg: 13.05 s; FedPA: 16.87 s; FedEP: 25.10 s; FedBE: 43.06 s; FedMDMI: 13.88 s. This shows that the time consumed by our FedMDMI per communication round does not differ significantly from FedAvg. The time required by FedBE is much larger than that of the other methods. As the network size increases, the time taken by all baselines also increases, aligning with our intuition. Finally, data heterogeneity does not affect local computation time when the numbers of local samples and updates are fixed. **Comment 2-Additional Baselines:** We will include these references in the revised manuscript. The survey provides an overview of Bayesian federated learning (BFL). For example, FedHB proposes a hierarchical BFL approach, using hierarchical Bayesian modeling to describe the generative process of clients' local data with local models governed by a higher-level global variate. FedSI proposes a personalized BFL method, performing posterior inference over an individual subset of network parameters for each client while keeping the other parameters deterministic. Inspired by FedRep and FedSI, we apply our algorithm to personalized Bayesian FL, named PerFed-MDMI. FedRep learns a shared data representation across clients and unique local heads for each client to fulfill their personal objectives. FedSI further updates the distributions of model parameters over the representation layers and sends these updated distributions to the server for global aggregation during training. The model parameters of the decision layers are then fine-tuned during the evaluation phase.
For subnetwork (i.e., representation-layer) inference, instead of using the Linearized Laplace Approximation from FedSI, PerFed-MDMI employs our model-data mutual information regularization. Also, since our posterior inference method involves no covariance matrix estimation, we do not need to select a smaller subnetwork to reduce computation and storage overhead. Empirically, we first evaluate our method against FedProx and FedHB on the traditional FL task of global prediction. Table 9 shows that our method outperforms FedProx and FedHB. For the personalized FL task, we compare our newly designed method, PerFed-MDMI, with FedAvg and FedRep, with the improvements presented in Table 10, where all data at each client is split into a 70\% training set and a 30\% test set. This indicates that our MI regularization is compatible with FedRep and FedSI, effectively promoting personalized FL. Additionally, since our focus is on training a global posterior rather than personalized posteriors, we did not directly compare accuracy with FedSI, FedPop, and other baselines in personalized FL due to time constraints. **Comment 3-Comparison of MCMC and Variational Approximation (VA):** We use SGLD instead of VA for three reasons. 1) The optimal posterior obtained in our FedMDMI is the Gibbs posterior, and SGLD has been proven efficient and effective for large-scale Gibbs posterior inference. 2) As shown in Line 212, there has also been pioneering work on obtaining a Gibbs posterior through VA. However, this method at least doubles the communication overhead due to the transmission of both the mean and covariance matrices. Another primary drawback of VA is its tendency to yield biased posterior estimates for complex posterior distributions. This issue is further compounded in FL, where data heterogeneity across clients already contributes to biased local posteriors, and this bias may be exacerbated by VA. 3) As shown in Figs.
3(c), 3(d), and 4(d), the convergence rate of our FedMDMI may not be the fastest, especially when compared to the optimization-based method SCAFFOLD. However, it still slightly outperforms the Bayesian-based baselines in most cases, showing that SGLD in FL is efficient and does not significantly reduce the convergence rate. Pdf: /pdf/6c1c0124b7638943518e7a7d203da57114265b5e.pdf
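For reference, Dir(α)-style splits such as Dir(0.2)-H used throughout these rebuttals are commonly generated by drawing per-class client proportions from a Dirichlet distribution, with smaller α producing stronger label skew. The following is a common-practice sketch, not the authors' exact partitioning code.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, rng):
    """Split sample indices across clients with Dir(alpha) label skew.

    For each class, draw client proportions ~ Dirichlet(alpha * 1) and
    deal that class's samples out accordingly. Small alpha means each
    class lands mostly on a few clients (high heterogeneity); large
    alpha approaches an iid split.
    """
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for i, part in enumerate(np.split(idx, cuts)):
            client_idx[i].extend(part.tolist())
    return client_idx
```

Under this scheme, the iid-H, Dir(0.7)-H, Dir(0.2)-H, Dir(0.1)-H columns in Tables 2 and 3 correspond to progressively smaller α, i.e., progressively harder heterogeneity.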
NeurIPS_2024_submissions_huggingface
2024
Adaptive Visual Scene Understanding: Incremental Scene Graph Generation
Accept (poster)
Summary: This paper proposes a new task, Continual Scene Graph Generation (CSEGG), where SGG models must dynamically update to recognize new objects and relationships. It introduces a benchmark with three learning regimes: relationship incremental, scene incremental, and relationship generalization. In addition, the paper proposes a learning method called Replays via Analysis by Synthesis (RAS), which utilizes textual triplets to synthesize new images for model training. Performance evaluation on a dataset shows the effectiveness. Strengths: 1. The paper is clear and easy-to-understand. 2. The effectiveness of key components has been verified. Weaknesses: 1. The significance of CSEGG is limited. In CSEGG, the model should be trained for new categories. However, such training processes are unstable, considering the data, hyperparameters, and hardware requirements. What are the advantages of CSEGG compared to open-vocabulary SGG [a] and zero-shot SGG [b]? 2. In RAS, the training targets are predicted by the previous SGG model, which may cause error propagation. Why not use a generative model [c] with fine-grained control signals (e.g., given boxes and captions)? 3. The two-stage SGG baseline (IMP, CVPR'17) is very old. Why not use the latest models (e.g., PE-Net [d])? 4. The model is evaluated on VG only. More experiments are needed to verify the generality, e.g., cross-domain evaluation. [a] Expanding Scene Graph Boundaries: Fully Open-vocabulary Scene Graph Generation via Visual-Concept Alignment and Retention. arXiv, 2023. [b] Zero-shot Visual Relation Detection via Composite Visual Cues from Large Language Models. NeurIPS, 2023. [c] Open-Set Grounded Text-to-Image Generation. CVPR, 2023. [d] Prototype-based embedding network for scene graph generation. CVPR, 2023. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. What does the "long-tailed distribution" in L43 refer to? long-tailed predicate distribution or long-tailed object distribution? 
Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors discussed the limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[XGhv.Weakness.1 - CSEGG is limited]** We would like to address the concerns regarding the differences between CSEGG and open-vocabulary SGG [a] and zero-shot SGG [b]. First, we respectfully disagree with the reviewer that the significance of CSEGG is limited. The problem setting in open-vocabulary SGG [a] and zero-shot SGG [b] differs fundamentally from the problem setting addressed by CSEGG. In CSEGG, we aim to simulate scenarios where the "new" predicates or "new" objects encountered by the model are completely novel and have not been seen by any model before, including existing language models or models with multiple modalities. For example, if a new drug called XYZ is discovered, this drug has not been introduced to any AI models, whether they are language, vision, or multi-modal models. As a result, all the zero-shot and open-vocabulary models referenced in the two citations would fail to identify this new drug, as there is NO prior knowledge about this drug to transfer in the first place. Second, open-vocabulary SGG [a] uses a frozen text encoder with pre-learned weights, assuming prior knowledge of objects or predicates. Similarly, zero-shot SGG [b] relies on large language models for description-based prompts, assuming these models have encountered the new objects or predicates. In contrast, CSEGG simulates scenarios where completely novel objects or predicates are introduced. Relying on pre-learned information from large language models or frozen text encoders would violate the continual learning setting. Both [a] and [b] are fantastic works. We will cite them and discuss how these works differ from ours in the related work section. However, our work focuses on continual learning in SGG, which is NOT addressed by these methods, as it is a different problem setting altogether. **[XGhv.Weakness.2 - Fine-grained controls for RAS]** We appreciate the reviewer's suggestion. 
However, we would like to point out that even without the fine-grained controls suggested in [c], our approach has already demonstrated decent performance, outperforming all the existing continual learning baselines. As we are the first to address continual learning in SGG, we introduced a proof-of-concept model called RAS. Our main goal was to show that generative modeling can prevent catastrophic forgetting in CSEGG. We agree that using generative models with fine-grained control signals like boxes and captions could improve our method. We will cite the suggested work in the related works section and incorporate it into our model in the final version. **[XGhv.Weakness.3 - Limited Choice of SGG Models]** We understand the reviewer's concern that the two-stage SGG baseline (IMP, CVPR'17) is old. However, with 1359 citations, it remains a pioneering model in SGG and a strong baseline for comparison in subsequent works. We included it to show that our method works on classical SGG models and that the CSEGG benchmark can be extended to other SGG models. Additionally, we included a one-stage baseline (SGTR, CVPR'22) to validate our benchmark with more recent models. We agree that PE-Net [d] is a very interesting work, and we plan to include it in the CSEGG benchmark for the final version. We included only one model from each category (one-stage and two-stage) due to hardware and time constraints. For Learning Scenario 1 (S1), we conducted 120 experiments per model, including 5 tasks for each of the 8 baselines (excluding joint models), with 3 runs per baseline for statistical significance. For Learning Scenario 2 (S2), we performed 48 experiments per model, involving 2 tasks for each of the 8 baselines, also with 3 runs per baseline. This totals 168 experiments per SGG model, and with 2 models, we conducted 336 experiments. Each experiment takes about 2 days on a single machine with 4 A5000 GPUs. 
Using 3 machines, training for S1 and S2 alone took approximately 7 months. Including Learning Scenario 3 (S3) and additional ablation studies, the entire set of experiments extended to around 9-10 months. Thus, we included only one model from each category (one-stage and two-stage) in our study. We invite the community to contribute by incorporating more SGG backbones. **[XGhv.Weakness.4 - No generality]** We agree with the reviewer that studying model generalization in scene graph generation is interesting. This includes training an SGG model on the VG dataset and testing it on other SGG datasets, such as OpenImages. However, we argue that model generalization and continual learning are completely different problems. In our work, we focus on continual learning. Ideally, we would like to include other SGG datasets, such as OpenImages, for our continual SGG problem. However, our current analysis is limited to the Visual Genome (VG) dataset due to hardware restrictions and time constraints, as mentioned in **[XGhv.Weakness.3]**. We appreciate the list of references related to open-vocabulary, open-set, and zero-shot problems in SGG. However, we would like to emphasize again that none of these works looks into the problem of continual learning, which is the main focus of our work. We will cite these papers and discuss the differences from our work in the related work section. As mentioned in **[XGhv.Weakness.1]**, our work studies continual learning problems in SGG, which is valuable, unique, and challenging. It is different from open-vocabulary, open-set, and zero-shot SGG. **[XGhv.Question.1 - "long-tailed distribution" in L43]** By "long-tailed distribution" we refer to the distributions of both objects and relationships, and hence predicates altogether. We will clarify this statement in the final version. We present histograms of object and relationship distributions in each task of each learning scenario in Fig. 5 in the Appendix. 
The histograms show that the long-tailed distribution of objects and relationships (predicates) persists across all tasks and scenarios. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I read the other reviewers' comments and the rebuttal. Overall, I am inclined to reject this paper. I want to mention that the description-based paradigm does NOT assume the model has encountered new objects or predicates. An LLM can generate descriptions for any object/predicate categories, and the generated descriptions can be used by the text encoder of VLMs. See [a,b] for more details. [a] Visual Classification via Description from Large Language Models, ICLR 2022. [b] Zero-shot Visual Relation Detection via Composite Visual Cues from Large Language Models. NeurIPS, 2023. --- Reply to Comment 1.1.1: Title: Language models are NOT equipped with continual learning ability without forgetting; zero-shot is NOT equal to continual learning Comment: We thank the reviewer for the response. However, we respectfully disagree with the reviewer that our work lacks contributions because we did not use language models to tackle the zero-shot problem. We are interested in continual learning problems in scene understanding in vision. Just like humans, we encounter new objects/relationships/contextual knowledge through vision. We learn them without forgetting the old knowledge. TLDR: Language models are NOT equipped with continual learning ability **without forgetting**; the zero-shot problem is NOT equal to the continual learning problem; existing zero-shot methods in the reference list do NOT address the problem of forgetting in continual learning. Example: a pre-trained language model only knows how to recognize <apples on the tables>. How does this language model learn to recognize <elephants in the jungle> without forgetting <apples on the tables>? We would appreciate it if the reviewer could point out existing work in continual learning that tackles this problem, i.e. 
learn new objects and new relationships without forgetting old objects and old relationships. ==== Language models for zero-shot learning are generally NOT equipped to handle the challenges of continual learning. The primary reason is catastrophic forgetting. When a language model is updated or fine-tuned on new data, it often overwrites its existing knowledge, leading to a significant drop in performance on previously learned tasks. This is because traditional language models lack mechanisms to preserve old knowledge while integrating new information. Hence, current language models cannot help us eliminate forgetting problems in vision. In contrast, continual learning models are specifically designed to address this issue, often using techniques such as memory replay, regularization methods, or architectural modifications to retain past knowledge while learning from new data. So, while zero-shot learning allows for impressive generalization to unseen tasks, it does not address the ongoing challenge of learning from a continuous stream of data without forgetting. This is a fundamental limitation when applying zero-shot models to real-world scenarios requiring continual adaptation without forgetting. We will cite all the recommended papers in zero-shot literature and highlight the key differences between zero-shot and continual learning.
Summary: The paper introduces the problem setup of continual learning for image scene graph generation. To this end, the authors reorganize existing SGG datasets to establish a new benchmark with three learning scenarios. Next, they present "Replays via Analysis by Synthesis" (RAS), which generates diverse scene structures, followed by synthesizing scene graphs for replays. Strengths: The scenario and benchmark experiments using both transformer-based and CNN-based backbones are exhaustive. The proposed RAS outperforms baselines for two scenarios. RAS parses previous-task scene graphs into triplet labels for diverse in-context scene-graph reconstruction. RAS then synthesizes images with Stable Diffusion models for replays from the above re-compositional graphs. Weaknesses: For the continual learning setup, I'd imagine a video/dynamic scene graph generation benchmark would be more useful. However, currently it's focused on image scene graphs alone. Technical Quality: 3 Clarity: 3 Questions for Authors: What do the authors think about creating a similar benchmark for dynamic scene graphs such as Action Genome or others? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Apart from simple geometric/location relations, the most important visual relations are associated with actions, and those actions evolve over time. Good examples of such dynamic scene graphs would be Action Genome, HOMAGE, and, more recently, EASG; e.g., a single image may not be enough to understand whether a person is getting down from or getting up on a horse. I'd like to understand how the continual learning setup can work for video/dynamic scene graphs. [1] Ji. 
et al., Action Genome: Actions as Compositions of Spatio-temporal Scene Graphs, CVPR 2020 [2] Rai et al., "Home Action Genome: Cooperative Compositional Action Understanding", CVPR 2021 [3] Rodin, Furnari, Min et al., "Action Scene Graphs for Long-Form Understanding of Egocentric Videos", CVPR 2024 Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[myHM.Weakness.1 - Focus on Image SGG explanation]** We agree with the reviewer that advancing continual learning in Scene Graph Generation (SGG) to include dynamic or video SGG is a natural and beneficial progression, as it more closely aligns with real-world settings. However, this research topic of continual learning in SGG is relatively unexplored, as no one has studied it before. Therefore, we need to begin with a basic and straightforward setting, i.e., continual learning for scene graph generation on static images. Here, we aimed to establish a comprehensive benchmark of baselines and datasets to provide a foundation for the research community to start developing more advanced continual learning methods for SGG on static images. With all these foundations developed in this work, we can then tackle more complex aspects, such as dynamic video SGG, in the next phase. **[myHM.Questions.1 - Opinion on Dynamic SGG Benchmark]** We agree that creating a similar benchmark for dynamic scene graphs is an excellent idea, and we definitely plan to extend our benchmark dataset to incorporate dynamic scene-graph models. As mentioned in our previous response, we are the first team to address SGG in a continual learning setting, and our initial goal was to begin with a basic and straightforward setting on static images. Only after this can we move on to incorporating dynamic SGG models into our benchmark. Thank you for bringing Action Genome to our attention; it has inspired us for future developments of our benchmark. We will certainly cite this work in our final version. **[myHM.Limitations.1 - Continual learning for dynamic SGG]** We agree with the reviewer that advancing continual learning in Scene Graph Generation (SGG) to include dynamic or video SGG is a natural and beneficial progression, as it more closely aligns with real-world settings. 
As mentioned earlier, we are the first team to address SGG in a challenging continual learning setting, and our initial goal was to establish a comprehensive benchmark for image-level SGG models. Only from here can we move on to developing continual learning SGG in dynamic and video settings. We thank the reviewer for the clear directions in continual learning for dynamic scene graphs. In response, we will include a paragraph in the related works section surveying dynamic scene graph generation works, such as the three works mentioned here: Action Genome, HOMAGE, and EASG. We will cite these papers in the related works. However, please note that none of these three works studies continual learning problems, nor do they look into the importance of studying continual learning in SGG in a dynamic setting. Expanding our benchmark to incorporate continual learning in dynamic SGG models will be a key focus of our future work. Thank you for the valuable suggestions and for highlighting important works in this area.
Summary: This paper proposes a benchmark and a framework for incremental scene graph generation using continual learning. The authors curated the benchmark from an existing benchmark SGG dataset, VG. They proposed three learning scenarios: relationship incremental, scene incremental, and generalization on relationships. Each of these learning regimes poses different continual learning challenges, such as learning new relationships, learning both new objects and relationships, etc. The authors proposed a generative replay method which can deal with the catastrophic forgetting of continual learning. They reported detailed experimental results along with ablation studies. Strengths: 1. Applying continual learning to scene graph generation could greatly benefit tasks like robotic navigation, where the agent needs to adapt to new scenarios with new objects and relationships 2. The data and code of the experiments will be publicly available Weaknesses: 1. While the paper proposes a benchmark and framework for continual learning for SGG with detailed experiments and evaluation, the presentation of the paper is very difficult to follow. 2. The problem formulation and formal mathematical definition of the overall methods are missing for most parts. Section 3 has some definitions at the beginning, then explains the learning scenarios and competitive baselines, followed by an explanation of the evaluation metrics. Section 4 explains the continual learning framework 'Replays via Analysis by Synthesis', but how this framework works in conjunction with SGG backbones (one-stage and two-stage existing SGG approaches) needs to be clarified and formulated in formal equations. 3. For the Gen Rbbox@K metric, could you please share some details on how you calculated the metric without the labels and how you ran the inference for predicting the relationship? Technical Quality: 2 Clarity: 2 Questions for Authors: 1. 
For the Gen Rbbox@K metric, could you please share some details on how you calculated the metric without the labels and how you ran the inference for predicting the relationship? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Yes, the authors have addressed the limitations and potential negative societal impacts Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Y6eG.Weakness.1-Poor paper presentation]** We appreciate the reviewer's feedback regarding the presentation of our paper. We would like to note that other reviewers did not mention any issues with the clarity or presentation of our work. However, we value your input, and it would be great if you could specify which parts of the paper were difficult to follow. This would help us make the necessary revisions and improvements for the final version. Thank you for your constructive feedback. **[Y6eG.Weakness.2-Lack of Mathematical Formulation]** For each scenario in CSEGG, we have a task-specific dataset $D_t$, where $t = 1, 2, 3, 4, 5$ for Learning Scenario 1 (S1), $t = 1, 2$ for Learning Scenario 2 (S2), and $t = 1, 2, 3, 4$ for Learning Scenario 3 (S3). At task $t$, we take the previously trained model $M_{t-1}$ (with weights $W_{t-1}$) and train it on the dataset for task $t$, $D_t$, to obtain weights $W_t$ and model $M_t$. If using an exemplar-based approach (Replays and RAS), the model is also trained on $E_t$, where $E_t$ is the exemplar set for task $t$. Note that $E_1$ doesn't exist, as no exemplar is needed for task 1 in any scenario. For replay-based methods, we create exemplar $E_t$ after training on task $t-1$ by storing data points from $D_{t-1}$. Therefore, after training on task $t$, we create $E_{t+1}$ from $D_t$. At task $t+1$, we obtain $M_{t+1}$ by training $M_t$ (with weights $W_t$) on $E_{t+1}$ and $D_{t+1}$. For RAS, we create $E_{t+1}$ as shown in Fig 3 in the manuscript. Following that, at task $t+1$, we obtain $M_{t+1}$ by training $M_t$ (with weights $W_t$) on $E_{t+1}$ and $D_{t+1}$. 
If we are using EWC, we use $W_t$ and $M_t$ to calculate $F_{t+1}$, where $F$ is the Fisher information matrix computed as $F_{t+1} = -\mathbb{E}\left[\frac{\partial^2}{\partial W_t^2} \log p_{M_t}(x \mid W_t)\right]$, where $M_t$ is the model of task $t$ and $W_t$ are the weights of task $t$. During training of task $t+1$, we add $L_{EWC}$ to the training loss. We calculate $L_{EWC}$ using the equation $L_{EWC} = \sum_{i} F_{t+1,i}\,(W_{t+1,i} - W_{t,i})^2$, where $i$ indexes the model parameters. While training for task $t+1$, we modify the loss as $L_{t+1} = L_{train} + L_{EWC}$. If we are using PackNet, after training of task $t$ we take $W_t$ and apply a pruning algorithm to obtain $(W_t)'$, where $(W_t)'$ are the pruned weights. At task $t+1$, we obtain $M_{t+1}$ by initializing $M_t$ with $(W_t)'$ and training on $D_{t+1}$. We also attach **Figure R1** in the rebuttal PDF to show the entire schematic of CSEGG. **[Y6eG.Weakness.3-Gen Rbbox@K explanation]** Gen Rbbox@K is specifically defined for Learning Scenario 3 (S3) to evaluate the CSEGG model's performance in locating bounding boxes. As described in Section 3.1, S3 consists of 4 tasks, with each task incrementally introducing 30 new objects. The relationship classes remain consistent across all tasks. In S3, there is a standalone test set in which novel objects, absent from any training set of S3, appear with the same set of common relationship classes used in training. During evaluation, to calculate Gen Rbbox@K, for each predicted bounding box we calculate the Intersection over Union (IoU) between the predicted bounding box and the ground truth boxes. If the IoU exceeds the predetermined threshold, it is considered a true positive (TP). We used IoU thresholds of 0.3, 0.5, and 0.7 for our experiments. If the IoU exceeds the threshold multiple times for the same predicted bounding box, we count it as a single positive prediction. 
This is because the metric aims to measure the model's performance in locating bounding boxes of objects that have not been seen during training. Counting the same box multiple times would be misleading, as it would inflate the number of TPs and the recall, while the actual number of unknown bounding boxes the model can generate might be low. The total possible positives (TP + FN) are determined by the total number of ground truth boxes in the image. This metric helps evaluate how well the CSEGG model locates unknown objects within an image. True positives (TP) represent successful identification of the location of an unknown object, while (TP + FN) represents all possible unknown objects the CSEGG model could locate. Thus, object classification labels are not required to calculate Gen Rbbox@K. Instead, we need the total number of ground truth bounding boxes in an image and the number of predicted boxes that meet the IoU threshold, as explained in Section 3.3. **[Y6eG.Question.1- Gen Rbbox@K explanation]** Same as **[Y6eG.Weakness.3]** --- Rebuttal Comment 1.1: Title: Paper presentation and metric explanation Comment: Dear reviewer, We wonder whether our comments have addressed your concern about the paper presentation and the clarification of the evaluation metric Gen Rbbox@K. We are open to further discussions if needed.
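A minimal sketch of the class-agnostic, IoU-based matching logic described above (helper names are illustrative; the greedy one-to-one matching of ground-truth boxes is one possible implementation choice, not necessarily the exact one used in the paper):

```python
def iou(box_a, box_b):
    # Boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def box_recall(pred_boxes, gt_boxes, thresh=0.5):
    """Class-agnostic recall: each predicted box counts at most once as a TP,
    and the denominator (TP + FN) is the number of ground-truth boxes."""
    matched = [False] * len(gt_boxes)
    tp = 0
    for p in pred_boxes:
        for j, g in enumerate(gt_boxes):
            if not matched[j] and iou(p, g) >= thresh:
                matched[j] = True
                tp += 1  # a single positive per predicted box
                break
    return tp / max(len(gt_boxes), 1)
```

Note that no class labels enter the computation; only box geometry and the chosen IoU threshold (0.3, 0.5, or 0.7 in our experiments) are needed.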
null
null
Rebuttal 1: Rebuttal: We appreciate all the reviewers' feedback. We encourage reviewers to refer to the PDF file containing additional figures. To differentiate these new figures in the rebuttal from those in the main text, we have prefixed them with "R" in the rebuttal. For example, Fig R1 corresponds to Fig 1 in the rebuttal PDF. We've included a point-by-point response for each of the three reviewers. Pdf: /pdf/e101b8d6086cb65208c60dd038017b5961011138.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Fisher Flow Matching for Generative Modeling over Discrete Data
Accept (poster)
Summary: The paper aims to build a Flow Matching [1] based generative model for discrete data. The approach models the discrete data as a categorical distribution that resides on the simplex, thereby translating the problem into continuous flows. Equipped with the Fisher-Rao metric, the simplex is identified as a Riemannian manifold, and the task is reduced to Riemannian Flow Matching [2]. To further simplify the geometry of the space, the sphere map is used, which identifies the simplex with the positive orthant of a hypersphere. Finally, the conditional path between every pair $(x_0, x_1)$ is defined via the geodesic interpolant induced by the Fisher-Rao metric. For marginalization, two joint distributions are considered: 1. $x_0$ and $x_1$ are independent, i.e., $\pi(x_0,x_1) = p_0(x_0)p_1(x_1)$ as in [2]. 2. $\pi(x_0,x_1)$ is the optimal transport (OT) plan between $p_0$ and $p_1$, approximated with minibatches [3]. Strengths: 1. The paper is well-written, and the method is rigorously presented. 2. The authors successfully frame the proposed method for generative modeling of discrete data in terms of well-established existing methods, which eases the understanding of the approach and potentially allows for a relatively low cost of implementation. 3. Aside from the computational cost of the OT plan, the method does not introduce additional computational costs compared to previous works. Weaknesses: 1. The use of Riemannian optimal transport with minibatches is of low novelty. This has also been done in [3] and on Euclidean space in [6,7]. 2. Two key contributions of the paper seem to be the change in geometry, that is, the introduction of the Fisher-Rao metric and the sphere map. However, the paper lacks a clear experimental demonstration of how much these two components contribute to the results. Specifically: 2.1. 
Regarding Figure 2, the authors claim that Fisher-Flow (FF)-OT on $\mathbb{S}_+^2$ (e) performs best; however, this is not clear, and one may even say that FF-OT on $\Delta^2$ (c) performs better. Furthermore, even without FF-OT, it is not clear that using the sphere map helps ((b) vs (d)). 2.2. In Figure 3.a the authors compare FF simplex, FF-OT simplex, FF sphere, and FF-OT sphere, which shows that FF-OT sphere performs best. It is also apparent that with increasing dimension the advantage of using the OT plan decreases. Figure 3.b shows that FF-OT sphere outperforms [4]. However, comparing Figures 3.a and 3.b, it seems that the other three, i.e., FF simplex, FF-OT simplex, and FF sphere, are actually outperformed by [4], which uses flows on the simplex but no OT plan. To conclude, this experiment suggests that the apparent advantage of FF-OT sphere over [4] may exist only in relatively low dimensions. 2.3. The other two experiments only show FF-OT sphere. 2.4. There is no ablation that focuses on the use of the Fisher-Rao metric compared to other potential metrics (though the authors do provide a theoretical justification). Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Can the authors expand on how their use of Riemannian optimal transport with minibatches differs from previous works? 2. The authors claim that [5] is restricted in the choice of possible source distributions. Can the authors please elaborate on that? 3. Could the authors possibly provide more experimental evidence to demonstrate that the proposed change in geometry indeed contributes a considerable improvement and that the gain in results is not mainly from the OT plan? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: It is suspected that the proposed method might not be as effective in large dimensions (i.e., large vocabulary sizes) compared to other methods. [1] Lipman, Y., Chen, R. T. Q., Ben-Hamu, H., Nickel, M., & Le, M. (2023). 
Flow Matching for Generative Modeling. arXiv preprint arXiv:2210.02747. [2] Chen, R. T. Q., & Lipman, Y. (2023). Flow Matching on General Geometries. arXiv preprint arXiv:2302.03660. [3] Bose, A. J., Akhound-Sadegh, T., Huguet, G., Fatras, K., Rector-Brooks, J., Liu, C.-H., Nica, A. C., Korablyov, M., Bronstein, M., & Tong, A. (2024). SE(3)-Stochastic Flow Matching for Protein Backbone Generation. arXiv preprint arXiv:2310.02391. [4] Stark, Hannes, et al. "Dirichlet flow matching with applications to dna sequence design." arXiv preprint arXiv:2402.05841 (2024). [5] Campbell, Andrew, et al. "Generative flows on discrete state-spaces: Enabling multimodal flows with applications to protein co-design." arXiv preprint arXiv:2402.04997 (2024). [6] Tong, Alexander, et al. "Improving and generalizing flow-based generative models with minibatch optimal transport." arXiv preprint arXiv:2302.00482 (2023). [7] Pooladian, Aram-Alexandre, et al. "Multisample flow matching: Straightening flows with minibatch couplings." arXiv preprint arXiv:2304.14772 (2023). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
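For concreteness, the geometric construction summarized in the review (sphere map plus Fisher-Rao geodesic interpolant) can be written out with standard information-geometry formulas; the notation below is chosen for illustration and is not copied from the paper:

```latex
% Square-root (sphere) map from the simplex to the positive orthant of the sphere:
\varphi : \Delta^d \to \mathbb{S}^d_+, \qquad
\varphi(p) = \big(\sqrt{p_1}, \dots, \sqrt{p_{d+1}}\big),
% which is an isometry (up to a constant factor) between the Fisher-Rao
% geometry on \Delta^d and the round metric on the sphere. Geodesics are
% therefore great-circle arcs, giving the closed-form interpolant
x_t = \frac{\sin\big((1-t)\,\theta\big)}{\sin\theta}\, x_0
    + \frac{\sin(t\,\theta)}{\sin\theta}\, x_1,
\qquad \theta = \arccos\,\langle x_0, x_1 \rangle,
% with x_0 = \varphi(p_0) and x_1 = \varphi(p_1).
```

The closed-form great-circle distance $\theta$ is also what makes the OT cost between minibatch samples cheap to evaluate.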
Rebuttal 1: Rebuttal: We thank the reviewer for their time and detailed feedback. We focus here on addressing the key clarification points raised in this review, while the global response contains new experiments on molecules and language modelling with additional baselines. ## Riemannian OT is of low novelty We acknowledge the reviewer's concern that OT for Riemannian manifolds has previously been considered in [1]. Unfortunately, the general theory of optimal transport on manifolds, Villani 2003 [2] (Theorem 2.47), does not give a prescription for a fast computational implementation of OT on manifolds, i.e., analytic expressions for the geodesic cost $c(x_0, x_1)$. While [1] found a closed-form solution to this cost for $SE(3)$, this cannot be generally extended to all Riemannian manifolds, since, for arbitrary manifolds, one must compute the geodesic by simulating the expensive Euler-Lagrange equations, which are 2nd-order PDEs. Our work on Fisher-Flows pushes the advantage of Riemannian OT within generative modelling by showing that: 1.) The probability simplex and the sphere both admit closed-form expressions for the OT cost $c(x_0, x_1)$. 2.) We demonstrate that OT via Fisher-Flows can be used for discrete data. We highlight that this is the first use of OT for generative modelling over discrete data. 3.) As proved in Appendix D, OT over the probability simplex elicits new benefits, such as the fact that $p_t$ becomes the Wasserstein geodesic between $p_0$ and $p_1$, and the associated velocity field $v_t$ minimises the kinetic energy. These two points were not rigorously proven in [1]. ## Clearer experimental demonstration of the benefits of the Fisher-Rao metric and sphere map ### OT in Fig 2 With regards to Fig. 2, we find both c) and e) to be of comparable visual quality and, more importantly, of higher quality than their non-OT counterparts. 
As this is a small toy synthetic density estimation task, the benefits of the Fisher-Rao metric and the added numerical benefits of the sphere map are less pronounced. ### Comparison of FR in Fig 3 Since the sphere is more numerically stable, for practical problems (where the number of dimensions can greatly increase) we expect Fisher-Flows on the sphere to perform best, regardless of OT. We see this in Fig 1 of the 1pg rebuttal PDF: Dirichlet Flow Matching is often worse than FF-no OT on the sphere as we increase the number of categories. With regards to OT, our experimental findings are in line with previous uses of OT in generative models, where lower variance in training and shorter paths during inference (fewer ODE integration steps [3,4]) are observed. ### The other two experiments only show FF-OT sphere. Our new experiments on molecule generation and language modelling (see 1pg rebuttal PDF) used Fisher-Flows on the sphere and without OT. In our language modelling results, we found that Fisher-Flow slightly outperforms discrete diffusion approaches (e.g., D3PM, SEDD) and marginally outperforms concurrent SOTA work on Masked Discrete Diffusion (MDLM [5]) released after the submission deadline. This suggests that geometry plays a key role in the improvement, as we compare against conventional discrete diffusion methods on a higher-dimensional problem setting of ~800k categories (text). ### No ablation on other potential metrics. Thank you for this suggestion! We have included new ablations in our 1pg rebuttal PDF that use the Euclidean metric (Linear Flow Matching) on the simplex for both the synthetic data and the DNA Promoter and Enhancer experiments. We observe that the Fisher-Rao metric is still on par with or better than Dirichlet FM in terms of empirical performance. 
Finally, considering other metrics beyond the Fisher-Rao and Euclidean ones is an interesting thought; however, it is unclear whether it is possible to easily obtain closed-form geodesics, which are needed for the simulation-free training mandated by the flow matching framework. ## Q1. Use of Riemannian OT vs. Prior Work Please see our detailed response under the heading "Riemannian OT is of low novelty" for a deeper discussion. ## Q2. Flexibility of source distribution in Campbell et al. 2024? We note that Campbell et al. 2024 explicitly state that the "conditional flows we use in this paper linearly interpolate towards $x_1$ from a uniform prior or an artificially introduced mask state, M." They do not make any theoretical claims about the generality of their starting source distribution, unlike what we do for Fisher-Flows. While it is potentially possible to accommodate a more flexible source distribution in their CTMC framework, it is not immediately clear how to do this easily. We will refine our discussion of Campbell et al. 2024 in our paper to highlight this aspect. ## Q3. Ablation on geometry vs. OT? Please see our previous response on Fisher-Flows without OT on language modelling. ## Closing comment We hope that our responses were sufficient in clarifying all the great questions asked by the reviewer. We thank the reviewer again for their time, and we politely encourage the reviewer to consider updating their score if they deem that our responses in this rebuttal, along with the new experiments, merit it. ## References [1] Bose et al. (2024). SE(3)-Stochastic Flow Matching for Protein Backbone Generation. arXiv preprint arXiv:2310.02391. [2] Villani. Optimal Transport: Old and New. Grundlehren der mathematischen Wissenschaften. Springer Berlin Heidelberg, 2008. ISBN 9783540710509. [3] Tong et al. Improving and generalizing flow-based generative models with minibatch optimal transport. arXiv preprint arXiv:2302.00482, 2023. [4] Klein et al. 
"Equivariant flow matching." Advances in Neural Information Processing Systems 36 (2024). [5] Sahoo et al. "Simple and Effective Masked Diffusion Language Models." arXiv preprint arXiv:2406.07524 (2024). --- Rebuttal 2: Title: Reviewer response Comment: I want to thank the authors for their efforts in adding more experiments. First regarding the Riemannian OT, thank you for emphasizing the contributions upon previous works. I am now convinced that more credit should be attributed to the authors than I initially acknowledged. Second regarding the significance of change in geometry from simplex to sphere, Table 1 in pdf actually shows no advantage to the sphere and to my understanding Table 2 compares only to discrete diffusion methods (where I would expect comparison like Table 1 to support such a claim.) However, I do find the work in this paper interesting and well presented, I will increase my score.
Summary: This paper proposes a framework that enables flow matching over a d-dimensional simplex, by instantiating a Riemannian flow matching algorithm using the Fisher-Rao metric. Some motivation connecting the Fisher-Rao metric to natural gradient descent and Riemannian optimal transport is used to justify the choice of the Fisher-Rao metric. Experimental validation is carried out on DNA sequences.

Strengths: The proposed method is quite a natural application of Riemannian flow matching to distributions over simplices.

Weaknesses:
- I felt the writing is a bit too mathematically dense, and the justifications for the Fisher-Rao metric feel a bit circular (see below for questions on Propositions 1 & 2).
- Additional ablation experiments on non-toy data would be good to have to verify some of the claims in the paper.
- It is hard for me (due to lack of expertise in the application area) to judge how well this approach performs.

Technical Quality: 3 Clarity: 3 Questions for Authors:

### Regarding the mathematical justifications

I find that Sections 3.3 and 3.4 are circular in their reasoning. In Section 3.3, it is stated "the choice of Fisher-Rao metric [is] the optimal one on the probability simplex". However, here "optimality" is defined in terms of the expected KL, W_KL. This is an arbitrary choice of optimality. There is obviously a correspondence between a distance function and a Riemannian metric, and any metric can be stated as the optimal one for some distance function. So:
- Why is considering the W_KL ball in Eq 8 useful? Are there computational reasons to use this? Are there theoretical justifications over other distance functions?

In Section 3.4, a similar problem arises. Here it is stated that the flows (1) lead to shorter global paths and (3) have lower kinetic energy. However, this is circular because the distance of a path (and consequently, the kinetic energy) is an implication of the chosen Riemannian metric g.
Here the correspondence between distance function <==> Riemannian metric shows up again. - Why do we care about the lengths of paths defined by the Fisher-Rao metric? - Does the choice of Fisher-Rao metric give velocity fields that are faster to simulate? I'm not completely sure what the point of Proposition 2 is. Equation 10 looks to me like a definition of p_t given a coupling pi. That is, given that we've already solved the optimal transport coupling, we can then take any probability path p_t that transports between the marginals of pi, so Proposition 2 is just stating a definition of a particular choice of p_t. - Here do you mean to imply something about this choice of p_t? E.g., that it allows expressing W_2 in terms of p_t through a dynamical formulation, where the kinetic energy (as defined by the Fisher-Rao metric) shows up? ### Regarding empirical validation The paper makes the following claims: (1) flexibility of source distribution (2) the sphere map has better numerical stability properties (3) the choice of Fisher-Rao metric enables continuous reparameterisation and OT The experiments do not yet completely justify these points, from what I can tell. (1) There is no experiment that uses a flexible source distribution. - Is there a case for using something that isn't a Dirichlet distribution which can be handled by Dirichlet FM? Or at least, a justification over the uniform noise distribution? (2) It's unclear why the sphere map helps. Concretely, there is a division by p in the computation of the inner product (Eq 5). However, this inner product is unneeded for the geodesic computation, since we can easily solve the geodesic on the sphere and then map back to the simplex. So this inner product seems to show up only in Eq 7, the training objective. However, given that this metric only depends on x_t, and that the model conditions on x_t, the optimal solution of CFM (Eq 7) is unaffected. 
So one can easily use any other metric for CFM without affecting the optimal solution.
- Have the authors tried just using another metric for CFM and training on the simplex representation?

(3) Similarly, I think a good ablation to do is just to use the Euclidean metric here. One can easily use the interpolants defined by the Fisher-Rao metric instead of linear, but otherwise just use regular Euclidean flow matching. Otherwise, related to the first concern, it is unclear why the connections to kinetic energy (as in Riemannian optimal transport) are important.

### Clarity about experiment setup

As someone not familiar with DNA sequences, the experiment section does not provide enough information for me to gauge the usefulness of these experiments. For instance, it should at least be stated what d is (I imagine d=4?). My subjective opinion is that many methods could work reasonably well with such a small d. It would be even better if the authors could comment on the real-world implications of their improved metrics (how does PPL affect a scientist who would use this generative model?)

Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper briefly discusses the limitations regarding higher d, but does not provide details on why the current approach cannot be applied. I am also curious why the authors did not consider relatively toy datasets (such as variants of MNIST) that would be more understandable / appealing for the general machine learning community. For instance, it is unclear to me if non-DNA experiments would be too hard, or even too easy, compared to the chosen experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort assessing our paper. We now address the key points in the review, while additional experiments are included in the global response and rebuttal PDF.

## Circular justifications for the Fisher-Rao

We understand that certain aspects of the theory could be explained more clearly. We now explain our reasoning step by step. First, note that a distance function on a Riemannian manifold is fully determined by the choice of metric, and that choosing a different distance function amounts to choosing a different metric. We require the following key desiderata from a metric to employ flow matching:

- Analytic expressions for manifold operations like the exponential and logarithmic map.
- Easy parameterisation of the tangent space in order to define vector fields.
- Interpretability and naturality, i.e. connection to established statistical and physical theory.
- Numerical stability of the metric over the entire manifold.

The FR metric satisfies points 1 and 2 above, and point 4 is achieved via the sphere map. As to point 3, the KL divergence is natural due to its strong link to fundamental concepts from Information Theory, being interpretable in terms of entropy; in this sense, KL is perhaps the simplest natural divergence over categorical probabilities. In addition, we argue that the pervasiveness of the forward KL divergence in generative models such as diffusion, flows, and VAEs testifies to its naturalness.

We now explain why our choice does not lead to circular reasoning, using Prop 1. Note that Prop 1 starts off with hypotheses over a generic metric “g” in the $W_{2,g}$ distance. The thesis of the proposition shows that natural gradient with respect to the KL-divergence-based $W_{KL}$ distance naturally produces the $W_{2, g_{FR}}$ gradient flow, with “$g_{FR}$” now being the FR metric, which was not assumed in the hypotheses.
Thus this is not a circular statement, as it deduces FR as the limit of the KL divergence, at the level of probability spaces.

>Are there computational reasons to use this?

The FR metric leads to theoretical benefits, as it also induces the Wasserstein geodesic over probability paths on the simplex. The Euclidean metric, as explored in Dirichlet Flow Matching, does not enjoy such benefits, which is arguably one reason it is empirically less performant than Fisher-Flows.

>In Section 3.4, a similar problem arises…

We believe there may be a slight misunderstanding, as the logic of our setup is to first pick a metric and then show that OT is computationally feasible. Indeed, other metrics are possible and will lead to different OT maps; but, crucially, they may not be easy to compute for generative models. Fortunately, for the FR metric the distance is simply the distance on a sphere, which is **easy to compute**, allowing us to efficiently compute the OT cost $c(x_0, x_1)$.

>Why do we care about the lengths of paths defined by the Fisher-Rao metric? … velocity fields faster to simulate?

Shorter paths lead to lower-variance training, on both the sphere and the simplex: the paths do not cross at intermediate points $x_t$, which would otherwise make training noisier, in line with the literature [1,2]. We note that OT often leads to vector fields that are faster to simulate using numerical solvers, as they incur smaller per-step error, as studied for $SE(3)$ in [3].

>Here do you mean to imply something about this choice of p_t? …

Prop. 2 shows that, by using FR geodesics as our flow, we automatically use the geodesic path between $p_0$ and $p_1$ with respect to the natural Wasserstein metric over probability measures. A consequence of this is that the links to kinetic energy and dynamical formulations apply (see Prop. 4).
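As an illustration of why the closed-form FR distance makes OT cheap, the following sketch (our own, not the paper's code) computes a minibatch OT coupling under the squared FR cost $d_{FR}(p,q) = 2\arccos\big(\sum_i \sqrt{p_i q_i}\big)$, with a brute-force search standing in for the Hungarian-style solver one would use for larger batches:

```python
import math
from itertools import permutations

def fr_dist(p, q):
    """Closed-form Fisher-Rao distance on the simplex: twice the
    great-circle distance between sqrt(p) and sqrt(q) on the unit sphere."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))  # Bhattacharyya coefficient
    return 2.0 * math.acos(min(1.0, bc))              # clamp for float safety

def minibatch_ot(x0s, x1s):
    """Minibatch OT coupling under the squared FR cost.
    Brute force over permutations for illustration only; a real
    implementation would use a Hungarian-algorithm solver."""
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(len(x1s))):
        cost = sum(fr_dist(x0s[i], x1s[j]) ** 2 for i, j in enumerate(perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

# Toy batch of source / target points on the 3-category simplex
x0s = [[1/3, 1/3, 1/3], [0.8, 0.1, 0.1]]
x1s = [[0.9, 0.05, 0.05], [0.2, 0.4, 0.4]]
perm, cost = minibatch_ot(x0s, x1s)   # pairs each x0 with a nearby x1
```

Here the coupling matches the near-uniform source to the flatter target and the peaked source to the peaked target, i.e. `perm == (1, 0)`; since the pairwise cost matrix is closed-form, the overhead per minibatch is dominated by the assignment solve rather than the distance computation.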
## Empirical validation

>There is no experiment that uses a flexible source distribution

We invite the reviewer to kindly read our global response, which includes new experiments on text that use a new source distribution: a convex combination of a specialised mask token and the uniform distribution.

>It's unclear why the sphere map helps ….

The metric appears when we compute $x_t$ due to the exp and log maps on the simplex (see Appendix B). Looking at equation 11, for the exp map, we need to compute the norm induced by the Riemannian metric, i.e. $\langle v_p, v_p \rangle_{FR}$. Similarly, the log map on the simplex in equation 12 explicitly requires the inner product. Exploiting the sphere map, we obtain $x_t$ and avoid numerical instability near the boundary of the manifold. Other metrics beyond the FR are indeed possible on the simplex, but they may not admit analytic formulae for $x_t$.

>Similarly, I think a good ablation to do is just to use the Euclidean metric here …

Please see our global response PDF, where we have included the Euclidean metric (Linear Flow Matching) on the toy datasets as well as our DNA experiments.

## Clarifying the experimental setup

In addition to DNA sequences, we have now included more standard generative modelling experiments on molecules and larger-scale language modelling, which demonstrate the scalability of Fisher-Flows across a host of different domains.

We thank the reviewer again for their valuable feedback. We hope that our rebuttal addresses their questions and concerns, and we kindly ask the reviewer to consider a fresh evaluation of our paper if they are satisfied with our responses. We are also more than happy to answer any further questions that arise.

## References

[1] Bose et al. (2024). SE(3)-Stochastic Flow Matching for Protein Backbone Generation. arXiv preprint arXiv:2310.02391.
[2] Tong et al. Improving and generalizing flow-based generative models with minibatch optimal transport.
arXiv preprint arXiv:2302.00482, 2023.
[3] Klein et al. "Equivariant flow matching." Advances in Neural Information Processing Systems 36 (2024).

---

Rebuttal Comment 1.1: Title: Kindly awaiting more feedback

Comment: We thank you again for your time and feedback, which allowed us to strengthen the paper with new experiments. As the end of the rebuttal period is fast approaching, we were wondering if our answers in the rebuttal were sufficient to address the important concerns raised regarding 1) the justification for using the Fisher-Rao metric and 2) empirical validation. We note that for point 2) we included a host of new experiments, including a large-scale experiment on text on the LM1B dataset, where we achieved SOTA among discrete diffusion and flow matching models. We would be happy to engage in any further discussion that the reviewer finds pertinent; please let us know! Finally, we are very appreciative of your time and effort in this rebuttal period and hope our answers are detailed enough to convince you to upgrade your score if you believe it's merited.
Summary: The paper proposes a novel flow-based generative model called *Fisher-Flow* for discrete data. The model uses the Fisher metric to deduce a Riemannian geometric structure on the statistical manifold. The authors also demonstrated the connections to natural gradient descent and optimal transport. Experiments on DNA datasets demonstrated better performance.

Strengths:
1. The geometric perspective on the manifold of categorical distributions is, to the best of my knowledge, a novel extension of flow matching models for discrete data. The use of the Fisher-Rao metric can induce a Riemannian structure for continuous parameterization.
2. The authors proposed a diffeomorphism to the unit sphere to avoid numerical instability during training and demonstrated better performance in experiments.
3. The authors provided connections between the proposed model and natural gradient descent and Riemannian optimal transport with mathematical proof. The proposed model can potentially share such theoretical benefits. Ablation studies also supported such claims.

Weaknesses:
1. Although claimed to be a general discrete generation model, **the proposed model was only tested on the specific task of DNA generation without further justification of its performance on more complex data such as image and text generation**. The two DNA datasets used in the paper have a small cardinality of 4 categories. Existing diffusion- or flow-based discrete generation models (e.g. BFN [1], D3PM [2], SEDD [3]) demonstrated good results on image generation (discretized pixel values or VQVAE tokens) and/or text generation (tokens) tasks, which have higher cardinalities, to show their scalability. It remains unclear whether the proposed model can be adapted for more general real-world tasks with a larger number of categories.
2. The choice of baselines is **too limited to be convincing enough** to demonstrate the proposed model's superior performance. 1.
For the density estimation task, **no baseline was compared** (results were more of an ablation study). For the toy data in higher dimensions, **only Dirichlet FM was compared**. For these two tasks on simplex, models like multinomial flow [4] should be tested to provide a comparison. 2. For the enhancer DNA design task, **only Dirichlet FM was compared as a flow-/diffusion-based baseline**, despite that many other discrete diffusion models have been proposed, e.g., BFN [1], D3PM [2], SEDD [3]. Some of them were included in the related work section but none was tested as baselines in this task. 3. Moreover, one simple but effective and commonly used baseline is missing across the board, which is linear flow matching on simplex (which was used in DirichletFM [5] and other multimodal flow matching papers as well). 3. Another critical concern lies in the **inconsistency of the reported performance of the same baseline models in this paper and the original paper**. Numbers for the Dirichlet FM reported in this paper for the promoter design task are **significantly worse** than those in the original paper. The original paper reported an MSE of 0.0269 for their best model (Table 1 in [5]) which is far better than the number indicated in this paper (0.034, Table 1 in this paper). Moreover, the linear flow matching model that achieved 0.0281 in Table 1 of [5] (outperforming the MSE of FisherFlow) was dropped in this paper. Similar large discrepancies in the baseline model's performance can be also noted in the FBD scores for enhancer design. Comparing Table 2 in [5] and Table 3 in this paper, the performance for the baseline Dirichlet FM was also significantly worse. As the numbers for other baselines were directly copied without change, it is unclear why the authors reported DirichletFM's performance differently. **Using the original MSEs or scores, the results indicated that the proposed model was not as good as Dirichlet FM or even Linear FM**. 4. 
In terms of evaluation metrics, **it is unclear whether perplexity is a valid metric** to compare flow/diffusion-based models against language models. Perplexity was initially proposed for autoregressive language models, in which it was calculated directly over conditional probabilities of discrete tokens: $$ PPL(X)=\exp\left(-\frac{1}{N}\sum_{k=1}^N\log p_\theta(x_k|x_{<k})\right) $$ However, in the continuous flow/diffusion setting, such probabilities may **not be well-defined** as they are neither autoregressive nor defined over discrete space. Therefore, their PPL cannot be calculated in this way. If the authors instead tried to calculate the joint distribution $\log p_\theta(x_{1:N})$, it remains unclear how to leverage the unconditional flow model to calculate this log-likelihood for an arbitrary given input sequence $x_{1:N}$, as the flow is defined over the continuous simplex space with infinitely many possible initial probability samples $p_0$. The authors did not explain how they were able to calculate the PPL in the paper, and it is unclear whether the derived PPL is comparable to the ones computed for autoregressive models.

[1] Graves, Alex, et al. "Bayesian flow networks." *arXiv preprint arXiv:2308.07037* (2023).
[2] Austin, Jacob, et al. "Structured denoising diffusion models in discrete state-spaces." *Advances in Neural Information Processing Systems* 34 (2021): 17981-17993.
[3] Lou, Aaron, Chenlin Meng, and Stefano Ermon. "Discrete diffusion language modeling by estimating the ratios of the data distribution." *arXiv preprint arXiv:2310.16834* (2023).
[4] Hoogeboom, Emiel, et al. "Argmax flows and multinomial diffusion: Learning categorical distributions." *Advances in Neural Information Processing Systems* 34 (2021): 12454-12465.
[5] Stark, Hannes, et al. "Dirichlet flow matching with applications to dna sequence design." *arXiv preprint arXiv:2402.05841* (2024).

Technical Quality: 3 Clarity: 2 Questions for Authors: 1.
Previous existing diffusion- or flow-based discrete generative models have achieved considerable success in image and text domains (see Weakness 1). As a general discrete generation model, will the proposed model also achieve comparable performance on these image or text generation tasks? Will it have scalability issues regarding datasets with a large number of categories? More experiments on other generative domains will be needed to demonstrate the model's effectiveness and scalability. 2. Can you provide more baselines in the visualization of the density estimation toy dataset and some quantitative evaluation metrics for this task to better demonstrate the generation quality? With the wide range of existing discrete diffusion or flow based models, more should also be compared as baselines for both toy examples and the enhancer generation task. Specifically, will the proposed model outperform simple linear flow matching? See Weakness 2 for some existing diffusion or flow baselines. 3. Why the original numbers from the Dirichlet FM paper were not used? If you reran the experiments in the Dirichlet FM paper, can you provide justifications for why you only reran the ones for Dirichlet FM but not for other baselines like linear flow matching? It is also unclear why the results of linear flow matching in the original Dirichlet FM paper were omitted -- it has better performance than FisherFlow according to the Dirichlet FM paper. The reported numbers in this paper were **significantly worse** than those in the original paper. If original numbers were used, the proposed model cannot outperform the Dirichlet FM model. See Weakness 3 for details. 4. How was perplexity calculated for flow-based models? Is it calculated in an autoregressive fashion as the language models? (See Weakness 4) Flow-based model is not an autoregressive model. During generation, all tokens are simultaneously denoised into meaningful sequences. 
If you instead modeled the joint distribution $\log p_\theta(x_{1:N})$, how was this marginal log-likelihood calculated for arbitrary input sequence $x_{1:N}$? How can you deal with the randomness in the initial samples $p_0$ if a deterministic cross-entropy loss needs to be computed? Furthermore, how was perplexity calculated for random sequences? The log probability $\log p(x)$ for one-hot random sequences should either be 0 or infinite. 5. OT seems to lead to better performance. Will it significantly add to the computation time during training? 6. There is a notation inconsistency in the paper about whether to refer to the Dirichlet flow matching model as *Dirichlet FM* (e.g., Table 1 & 2) or *DFM* (main text). Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations were mentioned in the conclusion section, and the potential negative societal impact was adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the time and effort they spent on reviewing our work. We are appreciative that the reviewer found our geometric perspective a “novel extension” to flow matching, and that this leads to numerically stable training, supported by better empirical performance. We now address the concerns raised by the reviewer.

## Experiments beyond DNA

We acknowledge the reviewer's valid concern. In this rebuttal we include two more standard experimental settings for discrete data: molecular generation (QM9) and language modelling over text (LM1B). We kindly invite the reviewer to read our global response and the 1pg rebuttal PDF, which give more detail on these experiments. In summary, we found Fisher-Flows to scale to over 800k categories on text and achieve the best upper bound on text PPL amongst discrete diffusion models. For QM9, our results outperform flow matching and are competitive with molecular diffusion models, almost saturating the task. We hope that our new experiments sufficiently demonstrate the scalability of Fisher-Flows to much higher dimensions and across a greater breadth of domains.

## Choice of baselines

As suggested by the reviewer, we have now included multiple baselines as part of our 1pg rebuttal PDF. Specifically, we added Linear Flow Matching (Euclidean metric) and Multinomial Flows to the synthetic task, where we found the latter to be numerically unstable as we increased the number of categories. We also added Linear Flow Matching to our DNA Promoter and Enhancer experiments, where we found Fisher-Flows to be on par. For our new language modelling experiment on LM1B, we compared against discrete diffusion models such as SEDD and D3PM, as well as SOTA concurrent work on masked discrete diffusion, MDLM [1]. We found that Fisher-Flows still outperforms previous methods like D3PM and SEDD and is marginally better than MDLM.
## Reporting of Dirichlet Flow Matching results

>"Why were the original numbers from the Dirichlet FM paper not used?"

Using the reference code implementation of DFM, we noticed severe methodological issues that made it clear that the original numbers in their paper were an inaccurate reflection of true performance. We found the following flaws in their experimental setup:

- For the DNA datasets, the provided command used the `--subset_train_as_val` argument, which sets the validation set to be the training set. This meant that their reported metrics were measured on the training set.
- Their released DNA classifier has only $11\%$ test accuracy, which makes it a poor choice as a representation space for computing FBD.
- Their synthetic data experiment did not standardise a training and validation split, meaning that their training set size was a function of run time, effectively infinite. Operationally, DFM was trained on $10^{10}$ points per epoch, which is a tremendous amount for a synthetic task. We standardised the experiment to train all models on a more modest $10^6$ points.

After fixing these issues, we reran DFM and found that it performed worse than initially reported. We are appreciative of the DFM authors for releasing their code, which enabled us to find these experimental flaws, some of which we outlined in Appendix F.4.2 and F.4.3 and will discuss further in our updated manuscript.

## Reporting PPL as a metric

We agree with the reviewer that calling our reported result perplexity on DNA is inaccurate. To compute our metric for DNA sequences, we used the continuous-time change of variables of the flow along the ODE trajectory over the manifold to compute the log-likelihood of the observed sequence (as done in Riemannian Flow Matching). Since the output space is discrete, this likelihood on the manifold is an upper bound on the discrete log-likelihood.
More accurately, this is a valid Evidence Lower Bound (ELBO), as proven in concurrent masked discrete diffusion models [1, 2]. We also note that for autoregressive models the PPL metric used is a specific factorisation of the joint $\log p(x_{1:N})$, and any valid ELBO is comparable. Nevertheless, the reviewer is correct, and we will amend our reported results by indicating this upper bound when reporting PPL for diffusion/flow matching models. Finally, we have also included a new Gen PPL evaluation metric for DNA sequences: the PPL assigned by a larger pre-trained autoregressive model to the output samples of a generative model. We found this metric could not distinguish the performance of DFM and Fisher-Flows, but we report it for completeness.

## Q1
Please see our global response for new larger-scale experiments, including on text.
## Q2
Please see above about additional baselines, as well as our global response.
## Q3
Please see our response above, which finds flaws in the empirical setup of DFM.
## Q4
We computed an upper bound on the PPL using the continuous-time change of variables formula associated with an ODE. This is a valid ELBO; see [1, 2] for proof.
## Q5
OT in Fisher-Flows is not computationally expensive, as both the sphere and the simplex have closed-form distance functions, which allows us to efficiently calculate the OT plan over mini-batches. This adds minimal overhead to training and is not affected by scaling.
## Q6
Thank you for pointing this out. We will fix it in the updated paper.
## Conclusion
We thank the reviewer again for their time. We believe we have answered, to the best of our ability, all the great questions raised by the reviewer. We hope our answers allow the reviewer to consider upgrading their score if they see fit. We are also more than happy to answer any further questions.
## References
[1] Sahoo et al. "Simple and Effective Masked Diffusion Language Models." arXiv preprint arXiv:2406.07524 (2024).
[2] Shi et al.
"Simplified and Generalized Masked Diffusion for Discrete Data." arXiv preprint arXiv:2406.04329 (2024). --- Rebuttal Comment 1.1: Comment: I thank the authors for their efforts in addressing my concerns and questions. Specifically, I appreciate that the authors have followed my suggestions to add additional experiments in other domains including graph generation and text generation. The additional results look interesting. Specifically, the LM1B results look somewhat promising, although it seems the results on graph generation are not very strong. However, my concerns regarding the validity of the DNA design benchmark still remain. ## Regarding PPL calculation I appreciate that the authors have provided an alternative PPL derived from recent concurrent work. I noticed the MDLM paper relies on masked diffusion, so I would appreciate further clarifications from the authors on the calculation of PPL. Specifically, did you calculate the NLL according to Equation 10 in the MDLM paper as $$ \mathcal{L}\_\text{NELBO}=\mathbb{E}\_q\int\_{t=0}^{t=1}-\frac{1}{1-t}\log\langle x\_\theta(z\_t,t),x\rangle dt $$ How did you choose the noised data $z_t$? Did you use the standard Euclidean inner product or the Fisher inner product defined in your Eq.5? ## Regarding Baselines for DNA Design The authors' response to my concerns regarding the baseline results for DNA design does not seem to have solid grounds, for which I further elaborate as follows. > the provided command used --subset_train_as_val argument which sets the validation set as the training set. This meant that their reported metrics were measured on the training set. This is true for enhancer design, but not true for promoter design in which the model is evaluated on the test set (according to the command in the DFM repo). > Their synthetic data experiment did not standardise a training and validation split and meant that their training set size was a function of run time—effectively infinite. 
Operationally, DFM was trained for $10^{10}$ points per epoch which is a tremendous amount for a synthetic task. We standardised the experiment to train all models with a more modest 10^6 points. This is the setting for the toy experiments but not for promoter design & enhancer design. According to the DFM paper Appendix B.1, it was trained for 200 epochs for promoter design and 800 epochs for enhancer design. In FisherFlow, Appendix F.4 indicated it was trained for $200000 \times 256 / 88470 \approx 579$ epochs for promoter design and $450000 \times 256 / 70892 \approx 1625$ epochs for enhancer design. In both cases, FisherFlow was trained on a doubling of data. Moreover, **the authors didn't respond to my concerns regarding the discrepancy of the results in the promoter design task**. To give a more concrete evaluation, I tried the DFM repo to test their provided checkpoint on the test set. **Five repeated experiments give an MSE of 0.02697 ± 0.00024**, which matches the reported number in the original DFM paper but is significantly smaller than the reported number in this paper. Furthermore, the new results for the Linear FM baseline in the authors' rebuttal are significantly worse than those reported in the DFM paper (despite that, the Linear FM still achieves better PPL than FisherFlow). Given that the DFM checkpoint matches the reported MSE, I would consider the result in the DFM paper for Linear FM to be more credible. It is also unclear, as I have mentioned in my review, why the authors reran the experiments *only* for DFM but copied the results for all the other baseline models directly from DFM for promoter design. If the authors decided to use an alternative experimental setting, they should rerun all the baselines to be consistent and comparable. In conclusion, I think the authors did not provide a convincing explanation regarding the discrepancy in the results of the promoter design task. The reported results for this task do not seem credible to me. 
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for engaging with us in this rebuttal and for their recommendations that improve the empirical caliber of this paper. We now answer the questions raised in their response. ## PPL Calculation For our new experiments, note that we outlined how the intermediate noisy distribution, $p_t$, is constructed, in the global response. This $z_t = x_t \sim p_t$ is a convex combination of 2 paths, a noisy path from a masked prior to $x_0$, and a noisy path from a uniform prior also to $x_0$. Each path for us uses the Fisher geodesic. Note that for a masked state to $x_0$ we traverse the edge of the sphere along its natural geodesic automatically. >Did you use the standard Euclidean inner product or the Fisher inner product defined in your Eq.5? To compute the NLL, we use the provided MDLM code which computes the NELBO as $-\log p(x_0 | x_t)$ integrated across time. We use the Fisher inner-product on the sphere in Eq. 10 of MDLM to compute $\langle x_{\theta}(x_t, t), x_0 \rangle$. ## Further clarifications on DNA experiments We appreciate the reviewer's efforts in helping increase the empirical transparency of our DNA experiments and that our code is also included in this submission. Moreover, **we commit to releasing all of our DNA experiment code (already included in the supplementary zip file) to enhance reproducibility.** We thank the reviewer for agreeing on the experimental issues in the DFM code for the synthetic and DNA Enhancer experiments. The former was trained on an unreasonable amount of data; the latter was trained and evaluated on the training set due to the use of the `--subset_train_as_val` flag. These issues across different experiments, we argue, justify our decision to build a more rigorous setup for all experiments, including DNA Promoter. >I tried the DFM repo to test their provided checkpoint on the test set. 
Five repeated experiments give an MSE of 0.02697 ± 0.00024. We appreciate the reviewer taking the time to rerun DFM experiments in their original codebase with their checkpoint. Our reproduction inherits the exact same model architecture and code for DFM, and our experimental protocol primarily addresses the dataset and evaluation pipelines. Their public code only provides **a single trained checkpoint** and, as a result, their reported MSE in Tab. 1 of 0.0269 does not contain stds over runs. We assume that the reviewer reran the Promoter experiments with 5 seeds using this checkpoint, which quantifies the performance of DFM only on this trained checkpoint, *with the randomness being due to the sampling of the prior*. In contrast, our DFM reproduction retrains the model from scratch 5 times and then evaluates this method. We argue that this is a more robust experimental setup, as the randomness is over the entire training procedure and limits the cherry-picking of a preferred run/checkpoint. Our top seed matched the original reported result in DFM but the mean over 5 seeds is worse. Consequently, we attribute the difference in reported numbers to randomness over 5 runs vs. their 1 run. We hope our answer here provides more clarity and credibility to the reproduction of our DNA Promoter experiments. >[...] FisherFlow was trained on a doubling of data. Since we harmonized the DFM, Linear, and Fisher-Flow implementations in our codebase, both DFM and Fisher-Flows are trained on the exact same amount of data. We evaluated all models on the test set using the best validation MSE (Appendix F.4.2), so the total quantity of data seen as a function of epochs is, very respectfully, not a valid concern. > Linear FM still achieves better PPL than FisherFlow We note that the std of Fisher-Flows overlaps with LinearFM in our 1pg PDF; as such, it is within the margin of error. 
Given the limited time frame of the rebuttal, we only had time to run 1 training seed for LinearFM but we will include 5 seeds in our updated paper. > Why the authors reran the experiments only for DFM but copied the results for all the other baseline models directly from DFM We found in the original DFM paper (Tab. 2) that the promoter baseline numbers, outside of the language model, were taken from [1] and, as a result, felt that these baselines were decoupled enough from the DFM codebase that they did not suffer the same flaws in the experimental setup. We reran the LM, Linear, and DFM baselines in our experimental setup as they were included in their original DFM codebase, which gave us an opportunity to report 5 seeds. We also note the other baselines were not included in the original DFM codebase. We will update our paper to include the origin of all baselines. We hope our response here alleviates the concerns raised by the reviewer and allows the reviewer to endorse our paper more strongly. We are also happy to answer any further questions; please let us know. [1] Avdeyev, Pavel, et al. "Dirichlet diffusion score model for biological sequence generation." International Conference on Machine Learning. PMLR, 2023. --- Reply to Comment 1.1.2: Title: Kindly awaiting further discussion Comment: Dear Reviewer, We thank you for all of your time and effort in engaging with us during this rebuttal period. As the deadline for this period is fast approaching, we were wondering if there were any further clarifications needed from our responses so that we share the same view on the chief concern regarding the reproduction of the DNA promoter experiments and/or the new experiments added in the global response. We highlight the following facts that may aid clarity and transparency: 1.) Our DFM reproduction and Fisher-Flows were trained in the same setup and we used validation MSE to select the best model to evaluate. So no models saw *more data*. 2.) 
Our experiments used 5 random seeds, with the model retrained from scratch each time, and our best DFM result matched the reported paper and released checkpoint result of DFM, but the mean was higher. 3.) All other baselines were taken from Avdeyev et al. 2023, which is also the case for DFM. We hope our rebuttal responses have helped strengthen the reviewer's view on our experiments and we are more than happy to engage in any further discussion before the end of the rebuttal period. We thank the reviewer again for their time and, if our responses and new experiments merit it, we would also be appreciative if the reviewer would kindly consider a fresh evaluation of our work.
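The NELBO-based perplexity bound debated in this thread can be made concrete with a minimal Monte-Carlo sketch. Assumptions are flagged loudly: `toy_denoiser` is a hypothetical stand-in for the learned $x_\theta(z_t, t)$, and the actual masking/noising of $z_t$ is elided, so only the structure of the estimator for $\mathbb{E}_q\int_0^1 -\frac{1}{1-t}\log\langle x_\theta(z_t,t),x\rangle\,dt$ (Eq. 10 of MDLM) is illustrated, not either paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(z_t, t):
    # Hypothetical stand-in for the learned denoiser x_theta(z_t, t):
    # returns a probability vector over the vocabulary for each position.
    logits = rng.normal(size=z_t.shape)
    logits[..., 0] += 3.0  # pretend the model favors the true token (index 0)
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return p / p.sum(axis=-1, keepdims=True)

def nelbo_mc(x_onehot, num_t_samples=256):
    """Monte-Carlo estimate of the NELBO integral: average over uniform t of
    -1/(1-t) * log <x_theta(z_t, t), x> (the noising of z_t is elided here)."""
    total = 0.0
    for _ in range(num_t_samples):
        t = rng.uniform(0.0, 1.0 - 1e-3)     # avoid the singularity at t = 1
        z_t = x_onehot.copy()                # placeholder; real code noises/masks tokens
        p = toy_denoiser(z_t, t)
        inner = (p * x_onehot).sum(axis=-1)  # <x_theta, x> per position
        total += (-np.log(inner) / (1.0 - t)).mean()
    return total / num_t_samples

vocab, seq_len = 8, 16
x = np.zeros((seq_len, vocab)); x[:, 0] = 1.0   # true tokens all at index 0
nelbo = nelbo_mc(x)                             # per-token NELBO estimate
ppl_upper_bound = np.exp(nelbo)                 # PPL bound reported from such an ELBO
print(nelbo, ppl_upper_bound)
```

Since the model never places probability 1 on the true token, the estimate is strictly positive and the exponentiated value is a perplexity upper bound greater than 1.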
null
null
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and constructive comments. We are grateful that the reviewers appreciated the novelty of our geometric perspective to discrete generative modelling (R 11qA) and that it is a natural application of Riemannian flow matching over simplices (R cNL3). We are also heartened to hear that the reviewers agree that our parametrization of flows over the sphere is beneficial for numerical stability, leading to demonstrated empirical benefits with little computation overhead (R 11qA, YHTA). We now address shared concerns raised by the reviewers, and summarise our new experiments and ablations included in the supplementary PDF. ## New experiments on complex data (R 11qA, R cNL3) The reviewers expressed concerns regarding the empirical verification of Fisher-Flows against other domains. We included, in our rebuttal PDF, 2 additional standard discrete generative modelling tasks: molecule generation (QM9), and language modelling on the One Billion Words Dataset (LM1B). These new domains enable us to test the scalability of Fisher-Flows against SOTA discrete generative models for molecules and text. ### Experimental setup and Flow parameterization To enable scalable and effective training of our flows, we parametrize the loss using the target prediction framework by learning a denoising network to predict the target discrete token given $x_t$. We note that the vector field can easily be recovered for inference and used to predict the next Euler step—thus no extra cost is added to sampling. ### Language Modelling We train Fisher-Flows on the LM1B dataset, which has a vocabulary size of ~800k words. This experiment seeks to address the scalability concerns as well as enable direct comparison to a more standard discrete generative modelling setting. We define Fisher-Flows by using the following probability path $p_t = \kappa_t \, p_{M} + (1 - \kappa_t) \, p_{unif}$, where $\kappa_t \in [0,1]$. 
Here $p_{M}$ is the Fisher probability path between the target $x_0$ and the designated mask token $M$, while $p_{unif}$ is also a Fisher probability path, between a sample from a uniform distribution and $x_0$. Thus $p_t$ is a convex combination of probability paths using a noise scheduler $\kappa_t$. Using a denoising architecture enables us to rewrite the original loss as a weighted negative log likelihood $L = \mathbb{E}[-\log p(x_0 | x_t)]$. Crucially, this allows us to calculate an upper bound to the test perplexity, which is a natural evaluation metric for language modelling and employed by concurrent SOTA works on discrete diffusion, MDLM and MD4 [1, 2], who prove that this is a valid Evidence Lower Bound. We report our results in Table 2 of the 1pg PDF and note that Fisher-Flows is the best performing discrete diffusion/flow-matching model, achieving a test perplexity of $\leq 26.51$ and $\leq 22.42$ after training on $33B$ and $327B$ tokens respectively. We note that our method outperforms the baselines suggested by the reviewers, D3PM and SEDD, as well as marginally outperforming the concurrent masked diffusion MDLM [1]. ### Molecule Generation We instantiate Fisher-Flows on QM9, where four different data types must be produced: positions, charges, atom types, and the possible bonds between them. Furthermore, the positions are continuous rather than discrete, which allows us to showcase the feasibility of using Fisher-Flows in a mixed setting of discrete and continuous data. We report our results in Table 3 of the rebuttal PDF. We find that Fisher-Flows outperforms a comparable flow matching baseline in EquiFM [3] and is competitive with a diffusion baseline in Jodo [4], all saturating the benchmark. We also include examples of generated samples in Fig. 2 from Fisher-Flows to qualitatively evaluate our approach. 
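The mixture path $p_t = \kappa_t \, p_M + (1 - \kappa_t) \, p_{unif}$ described above can be sketched via the standard square-root embedding of the simplex into the sphere, under which the Fisher metric becomes the round metric and geodesics are great-circle arcs. Assumptions: treating $\kappa_t$ as a per-token probability of following the mask path is our guess at one way to realize the convex combination, and `noisy_point`/`slerp` are illustrative names, not the authors' code.

```python
import numpy as np

def sphere(p):
    """Square-root map: categorical distribution -> positive orthant of the unit sphere."""
    return np.sqrt(p)

def slerp(u, v, t):
    """Great-circle (geodesic) interpolation on the sphere from u (t=0) to v (t=1)."""
    omega = np.arccos(np.clip(u @ v, -1.0, 1.0))
    if omega < 1e-8:
        return v
    return (np.sin((1 - t) * omega) * u + np.sin(t * omega) * v) / np.sin(omega)

def noisy_point(x0, t, kappa_t, vocab):
    """Sketch of sampling from the mixture path: with probability kappa_t noise
    along the mask path, otherwise along the uniform path (assumption, not the
    paper's exact construction)."""
    mask = np.zeros(vocab); mask[-1] = 1.0       # designated mask token M
    unif = np.full(vocab, 1.0 / vocab)
    prior = mask if np.random.rand() < kappa_t else unif
    # geodesic from the prior (t=0) to the clean one-hot x0 (t=1)
    return slerp(sphere(prior), sphere(x0), t)

vocab = 5
x0 = np.zeros(vocab); x0[2] = 1.0
z = noisy_point(x0, t=0.5, kappa_t=0.7, vocab=vocab)
print(z)
```

Since both endpoints are unit vectors with nonnegative entries, the interpolant stays on the sphere and squares back to a valid probability vector at every `t`.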
## New ablations and evaluation metrics (R 11qA, RcNL3, YHTA) To make our comparisons more robust, we extended the synthetic experiment with a variety of new baselines; the results are shown in Fig. 1 of the rebuttal PDF. We included Linear Flow Matching, D3PM, and Multinomial Flow, as suggested by the reviewers. Fisher-Flows clearly outperform Linear FM across all dimensions. Although between dimensions 40 and 100/120 Fisher-Flow performs marginally worse than both D3PM and Multinomial Flows, past dimensions 100 and 120 the KL divergences of these two methods diverge to infinity (hence their absence from the figure), indicating that some categories have been sampled essentially zero times. Our Fisher-Flow parametrization does not incur this numerical failure mode. ### Generative Perplexity We updated our DNA sequence results in Table 1 of the rebuttal PDF by first amending the reported Perplexity as an upper bound to the true test perplexity, as our flows yield a valid ELBO. In addition, we also included a common evaluation method (Gen-PPL): the perplexity induced by autoregressive language models that we train on the same dataset as the evaluated generative model. We train small GPT2 DNA language models on the respective datasets and assess the flow model’s DNA generations under these. Random sequences obtain gen-ppls of over 4 for the Promoter DNA, fly-brain, and melanoma DNA LMs. The gen-ppl values for Fisher-Flows, Dirichlet Flow Matching, and Linear Flow Matching for all DNA datasets are all around 1, exhibiting no clear winner under this metric. We also include a Linear Flow Matching baseline and find that Fisher-Flows is on par and both are better than Dirichlet Flow Matching. ## References [1] Sahoo et al. "Simple and Effective Masked Diffusion Language Models." arXiv preprint arXiv:2406.07524 (2024). [2] Shi et al. "Simplified and Generalized Masked Diffusion for Discrete Data." arXiv preprint arXiv:2406.04329 (2024). [3] Song et al. 
"Equivariant flow matching with hybrid probability transport." arXiv preprint arXiv:2312.07168 (2023). [4] Huang et al. "Learning joint 2d & 3d diffusion models for complete molecule generation." arXiv preprint arXiv:2305.12347 (2023). Pdf: /pdf/0d48f8bb2579cd31504501a24829af297f34fb07.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Sparse High Rank Adapters
Accept (poster)
Summary: This work proposes a new PEFT method, SHiRA, which finetunes 1-2% of pretrained model weights. The authors demonstrate that the resulting sparse adapter weights can be combined in multi-adapter settings with less concept loss than LoRA as the sparse adapters are mostly orthogonal. Further, the authors empirically demonstrate that the scatter operation used in SHiRA has a lower latency than fusing LoRA weights at inference time for model embedding dimensions > 1024. Strengths: * This is a timely work, as the multi-adapter setting offers great promise for memory constrained devices such as mobile phones. * The proposed method outperforms LoRA across a diverse set of tasks and models. * The paper is presented well, with clear figures, tables, and discussion. Weaknesses: * The primary weakness is the relatively modest novelty of this work in the context of Diff Pruning [1] and SFT [2]. Both of these prior works follow a very similar strategy as SHiRA, namely, fine tuning a very small, sparse set of the original pretrained parameters. In my view, the most unique aspect of SHiRA is the emphasis on composability in the multi-adapter setting and related analysis. A comparison of the performance differences between SFT-AG and SHiRA would help establish the potential benefits / drawbacks of using dynamic mask selection (SFT) vs. static (SHiRA). In any case, [2] is a very relevant work which should be discussed in the related work section. * The primary motivation for this work is low-latency adapter switching; providing end-to-end profiling would be much more convincing than focusing only on the scatter-op as in Appendix B. Perhaps Appendix J was originally intended for this analysis? It is unclear how significant the overall effect of the latency reductions in Appendix B are in the context of online or batched inference on an edge device. Does SHiRA provide any latency benefit for smaller embedding dimensions such as those used in StableDiffusion? 
* One of the main benefits of LoRA is the reduced memory footprint for fine-tuning with adaptive optimizers such as Adam. Due to the momentum buffers typically requiring full float precision, LoRA greatly reduces the memory overhead required for fine-tuning. In contrast, it appears that based on Appendix C.3 SHiRA must materialize the full grad buffers for the pretrained weights which are selectively set to zero for frozen parameters with a post gradient accumulation hook. I believe SHiRA could be reparameterized as $W_{new} = W_{pretrained} + \theta_{shira}$ such that only the $\theta_{shira}$ parameter tracks gradients and therefore Adam would not allocate buffers for every parameter in $W_{pretrained}$. Acknowledging the difference in memory footprint between LoRA and SHiRA during training and suggesting how this overhead may be avoided would help extend SHiRA to low-memory PEFT settings. * The pruning criterion studied for mask initialization could be expanded to include some additional, more modern criteria designed specifically for transformer based models such as Movement Pruning [3] and Wanda [4]. Further discussion on the surprisingly high performance of the SHiRA-Rand mask would be beneficial as well. * An analysis on how to distribute the sparse fine-tuned parameters across layers was not discussed. It would be particularly interesting to explore how to allocate the fine-tuning parameter budget across various modules in the network. For instance, how does the uniform strategy showcased here compare with OWL [5]? [1] D. Guo, A. M. Rush, and Y. Kim, “Parameter-Efficient Transfer Learning with Diff Pruning.” arXiv, Jun. 09, 2021. doi: 10.48550/arXiv.2012.07463. [2] A. Ansell, I. Vulić, H. Sterz, A. Korhonen, and E. M. Ponti, “Scaling Sparse Fine-Tuning to Large Language Models.” arXiv, Jan. 29, 2024. doi: 10.48550/arXiv.2401.16405. [3] V. Sanh, T. Wolf, and A. 
Rush, “Movement Pruning: Adaptive Sparsity by Fine-Tuning,” in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2020, pp. 20378–20389 [4] M. Sun, Z. Liu, A. Bair, and J. Z. Kolter, “A Simple and Effective Pruning Approach for Large Language Models.” arXiv, Jun. 20, 2023. doi: 10.48550/arXiv.2306.11695. [5] L. Yin et al., “Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity.” arXiv, Oct. 08, 2023. doi: 10.48550/arXiv.2310.05175. Technical Quality: 3 Clarity: 3 Questions for Authors: ## Questions * What are the key differences between SHiRA and SFT? How does SFT compare with SHiRA in terms of generalization performance? * What are the end-to-end latency profiles of LoRA vs. SHiRA when performing both online and batched inference? I would be interested in the profiles for both the single adapter case and the multi-adapter setting. How does the latency scale as the number of adapters is increased in the multi-adapter setting for both methods? Based on Figure 7, it does not appear that SHiRA would benefit StableDiffusion 1.5 since it has a much smaller embedding dimensions than Llama (320 for the UNet vs. 4096), does SHiRA provide a latency benefit at this small embedding dimension? * What is the memory overhead required for fine-tuning with SHiRA vs. LoRA as currently implemented? Can SHiRA be engineered to be more memory efficient than currently implemented, and if so, how? * What is the effect of the dataset order in the SHiRA-WM-Non-Overlap setting? Is there an appreciable difference in performance for the dataset that is fine-tuned on the top 1% of weights vs. those datasets trained with lower magnitude weights? * The performance of SHiRA-WM-Overlap was surprising, I speculate that this may be due to the three datasets being relatively similar (QA based reasoning). Does SHiRA-WM-Overlap maintain its high performance when fine-tuned on very different datasets? 
For example, combining adapters trained for multiple-choice based QA reasoning and in context retrieval augmented QA (TriviaQA, for example). Another way to perform this analysis could be to compare the L2 distance between the pretrained weights and the individually trained single adapters vs. the distance between the single adapters. Are the single adapters trained on different datasets more similar to each other than the pretrained weights? * How are the number of trainable parameters determined? While performance remains competitive at 1%, it would be worthwhile to examine the trade-off between latency and generalization performance of smaller or larger adapters. * Did the authors experiment with different layerwise distributions of the fine-tuning parameter budget? For instance, only fine tuning the MHA modules or only the MLPs? Another interesting approach could be to focus the fine-tuning budget on earlier blocks as recent works [6] have found that these early blocks have a disproportionate impact on the model output. * In Appendix B.3, the SNIP code example determines the mask by selecting the topk elements in a gradient_dict member variable of the SFTTrainer class. Does this dict contain the products of the gradient and weight magnitudes? ## Suggestions: * Adding a discussion of PERP [7] to related work. While PERP aims at restoring LLM performance post pruning, it is related to this work in that it also finds that a very small portion of the pretrained parameters (<1% in some cases) can be used to fine-tune the network. * Add citation for PEFT library on L70 * The typical LoRA fusion formulation is $W_{new} = W + BA$. I suggest revising to match the original paper unless the authors prefer to explicitly define the shapes of matrices B and A. * The SHiRA-Rand baseline in Table 10 appears to be very strong. I note that random selection has been established as a robust baseline in the dynamic sparse training literature. 
Adding a discussion of the performance of the random mask and potentially expanding the main paper’s results to include the random mask would be of interest to the reader. [6] S. He, G. Sun, Z. Shen, and A. Li, “What Matters in Transformers? Not All Attention is Needed.” arXiv, Jul. 07, 2024. doi: 10.48550/arXiv.2406.15786. [7] M. Zimmer, M. Andoni, C. Spiegel, and S. Pokutta, “PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs.” arXiv, Dec. 23, 2023. doi: 10.48550/arXiv.2312.15230. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Adequately addressed; however, adding additional commentary on memory required for fine-tuning would help reader understand if this method is applicable to memory-constrained fine-tuning setups. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
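The reparameterization $W_{new} = W_{pretrained} + \theta_{shira}$ suggested in this review can be sketched in a few lines. This is a minimal NumPy illustration (all names such as `theta` and `adam_step` are hypothetical, and it is not the paper's actual PEFT implementation) of why Adam's moment buffers would then scale with the sparse parameter count `k` rather than with the full weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 64
W_pre = rng.normal(size=(d, d))                 # frozen pretrained weights
k = int(0.01 * d * d)                           # ~1% trainable entries
idx = rng.choice(d * d, size=k, replace=False)  # static SHiRA mask (flat indices)
theta = np.zeros(k)                             # trainable sparse values
m, v = np.zeros(k), np.zeros(k)                 # Adam buffers: size k, not d*d

def effective_weight():
    """W_new = W_pretrained + theta_shira, materialized via a scatter."""
    W = W_pre.copy()
    W.flat[idx] += theta
    return W

def adam_step(grad_full, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, t=1):
    """One Adam update that only ever touches the k masked entries."""
    global theta, m, v
    g = grad_full.flat[idx]                     # gather gradients at masked entries only
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    mh, vh = m / (1 - b1 ** t), v / (1 - b2 ** t)
    theta = theta - lr * mh / (np.sqrt(vh) + eps)

grad = rng.normal(size=(d, d))                  # pretend backprop produced this
adam_step(grad)
W_new = effective_weight()
```

Only the `k` masked entries of `W_new` differ from the pretrained weights, and the optimizer state is `2k` floats instead of `2 d^2`.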
Rebuttal 1: Rebuttal: We thank the reviewer for constructive feedback and appreciating the strengths of our work. Below we address the concerns: 1: Thanks for pointing out these related works. SFT is an excellent (and concurrent) work that aims to scale sparse finetuning to LLMs using dynamic masks. In contrast, SHiRA uses a static mask. We have now created a memory- and latency-efficient implementation for SHiRA based on the PEFT library. Our implementation uses the scatter-op during training (see section B in common response). A key difference between SFT and SHiRA is that dynamic mask-based SFT requires users to install custom sparse linear layer kernels (i.e., “linear-sd” in the official SFT code). This can sometimes make it non-trivial to run SFT particularly on different environments (e.g., if PEFT or CUDA versions are dramatically different). On the other hand, our static mask-based SHiRA PEFT implementation does not require any custom sparse kernel installations and uses pure Pytorch scatter-op during the training. Hence, an important benefit of SHiRA is its ease of training/inference deployment. Other differences between SHiRA and SFT include detailed analysis of multi-adapter fusion properties, including impact of sparsity on orthogonality between adapters, etc., which were not discussed in SFT paper. Also, SHiRA demonstrates its effectiveness on both vision and language tasks, whereas SFT only discusses the language tasks. We wanted to use SFT code to compare SFT vs. SHiRA on commonsense reasoning tasks which were not included in the SFT paper. Unfortunately, during the short rebuttal process, we were not able to complete these experiments due to certain environment issues on our end. We will include a head-to-head comparison in the next version of the paper. Nevertheless, SFT seems to be the very first work that scales partial finetuning to LLMs. 
We will cite this concurrent work (released nearly 3 months before the conference deadline), highlight its clear importance, and discuss all these differences in the related work section. 2: Please see section A in common response. 3: Please see section B in common response. 4: Given a mask, SHiRA can be easily extended to any pruning strategy. Designing an optimal mask is an important future direction. We will cite Movement Pruning/Wanda and discuss them in a future work section. The scope of our current work is only to demonstrate that even the most basic pruning criteria outperform LoRA. With our PEFT implementation, we have now provided a way to perform partial finetuning at lower training memory costs than LoRA. The rank of random sparse matrix has been studied extensively using random graph theoretic and combinatorial techniques [P1, P2]. Based on the results discussed in [P1, P2] the rank of the SHiRA-Rand adapter should be high. Empirically, we did observe that our SHiRA-Rand adapters were high rank which might explain their high accuracy. 5: SHiRA makes it easy for users to perform sparse finetuning without having to discover “the best parameters” to train. This is one of the biggest advantages of LoRA: users just specify a rank and then train the model. In a similar spirit, SHiRA users would just need to provide an application-dependent “adapter size”. Indeed, layerwise distribution of these parameters is of interest but beyond the scope of current study. We will discuss the OWL and reference 6 suggested by reviewer in future work. A similar analogy is how AdaLoRA [P3] equipped LoRA with layerwise ranks (instead of a constant rank for all layers) and SoRA [P4] explored adaptive ranks. Similar follow up studies can be conducted for SHiRA as well. [P1] On the rank of a random binary matrix. https://arxiv.org/abs/1806.04988 [P2] The rank of sparse random matrices. 
https://arxiv.org/pdf/1906.05757 [P3] AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). https://arxiv.org/pdf/2303.10512 [P4] Sparse Low-rank Adaptation of Pre-trained Language Models (EMNLP 2023). https://arxiv.org/abs/2311.11696 Other questions: Most questions are addressed above. For remaining questions, please see below: Q2: [Single adapter vs. multi-adapter latency profiles and effect of small embedding dimensions]. In the unfused mode for LoRA, inference latency will keep increasing with the number of adapters. For SHiRA, inference always happens in fused mode, so inference latency will always be equal to base model latency once the fusion is complete (which we have shown in Table S4 is much more efficient than LoRA fusion). On smaller embedding dimensions, we have shown significant latency benefits (for adapter fusion) on SDXL (2.6B params in SDXL vs. 7B params in LLaMA2). On fusion speed for smaller weight dimensions: We have plotted the data from Fig. 7 (Appendix B in the submitted paper) in semi-log-y format in Fig. S3. Clearly, we see 13x to 16x improvements in adapter loading using scatter-op for SHiRA. Hence, scatter-op is significantly faster than LoRA fusion even at smaller embedding dimensions. For realistic networks like SDXL and LLaMA2, end-to-end switching times are provided in Table S4 in the rebuttal PDF. Therefore, we still see significant speed up (4.68x-5.71x) in adapter switching for real networks. Q4 [Dataset order for SHiRA non-overlap]. Yes, the accuracy of single adapters slightly changes when we train them on top 1% or top 1-2% or top 2-3%. This data can be inferred from Table 4 (submitted paper): Arc-Easy and BoolQ lose slight accuracy in the non-overlap case compared to their typical top-1% training checkpoints. Arc-Easy was trained on top 2-3% parameters for the non-overlap case and loses accuracy from 77.57% to 75.97%. BoolQ was trained on top 1-2% and loses accuracy from 78.07% to 76.94%. 
This is highly application dependent and highlights why robust masks for SHiRA are an important direction for future research. Remaining minor concerns are in the official comment below due to lack of space. --- Rebuttal 2: Title: Remaining minor concerns (due to lack of space) Comment: Continued from Author Rebuttal… Q5 [L2 distance analysis for SHiRA overlap: Are adapters more similar to each other than to base model weights?] Table 4 (and section 5.3.2 in submitted paper) have results for unstructured SHiRA-WM masks. Fig. S2 (rebuttal PDF) shows AWOM and AWOR for unstructured SHiRA masks such as SHiRA-WM overlap and non-overlap masks. As evident, the product $\mathcal{A}_1^T \mathcal{A}_2$ has a very similar number of zeros for both overlap and non-overlap cases. This suggests that the relative orthogonality properties between SHiRA-overlap and non-overlap would be similar and that explains why SHiRA-overlap performs well. Hence, for unstructured masks, overlap and non-overlap adapters have similar orthogonality properties. Since overlap adapters are trained on top-1% weight magnitudes, they tend to achieve slightly higher single adapter accuracy. Table S5 (rebuttal PDF) shows the L2 analysis for the adapters trained in Table 4 (submitted paper). We compute the L2 distance between each adapter and the original pretrained weights (all adapters train top 1% weights in the overlap setting) as well as the L2 distance between each adapter. Clearly, each adapter is closer to the pretrained weights compared to the other adapters. This demonstrates that the tasks are sufficiently different. We hypothesize that the main reason for the good performance of SHiRA-WM-overlap is its orthogonality properties as shown in Fig. S2 (rebuttal PDF). Q6 [#trainable params] is purely a user-defined metric and depends on what size adapters we want. For all experiments, adapter sizes are kept close to LoRA for a fair comparison. Q8 [gradient-dict?] 
Yes, that dictionary contains the products of the gradient and weight magnitudes. Thanks for the remaining suggestions. We will incorporate the rest of them in the next version of the paper. With these new results, the paper has improved significantly. We would greatly appreciate if the reviewer could increase the rating of our work. --- Rebuttal Comment 2.1: Comment: I thank the authors for their detailed rebuttal, clarifications, and additional analysis. I agree that SFT can be considered contemporaneous work. My other concerns regarding memory overhead and latency were adequately addressed by the rebuttal figures and discussion. Based on the above, I've elected to increase my inital score. --- Reply to Comment 2.1.1: Title: Thank you! Comment: Thank you so much for increasing the rating of our work and for the very detailed feedback. It genuinely improved the quality of our work.
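The contrast drawn throughout this rebuttal between LoRA fusion (which rewrites every weight entry) and SHiRA's scatter-op switching (which touches only ~1% of entries) can be sketched as follows. The shapes, the 1% density, and the `idx`/`vals` representation of a stored sparse adapter are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 1024, 16
W = rng.normal(size=(d, d))                 # base model weight

# LoRA fusion touches every entry of W (dense rank-r update):
B, A = rng.normal(size=(d, r)), rng.normal(size=(r, d))
W_lora = W + B @ A

# SHiRA switching is a scatter-op over ~1% of entries:
k = int(0.01 * d * d)
idx = rng.choice(d * d, size=k, replace=False)  # static mask (flat indices)
vals = rng.normal(size=k)                       # stored sparse adapter values

W_shira = W.copy()
W_shira.flat[idx] = W.flat[idx] + vals          # fuse: update masked entries only

W_restored = W_shira.copy()
W_restored.flat[idx] = W_shira.flat[idx] - vals # unfuse: rapid switch back to base
```

Fusing or unfusing the sparse adapter moves `k` floats instead of `d*d`, which is the intuition behind the reported switching speedups; unfusing recovers the base weights.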
Summary: This paper presents a PEFT method, which applies gradient masks for downstream adaptation. It claims three main contributions: rapid adapter switching, lower concept loss, and higher rank. Experiments are conducted in the area of LVMs and LLMs. The proposed method presents SOTA performance and adapter switching efficiency. Strengths: 1. Leaving the LoRA framework may indeed bring additional benefits, such as adapter switching and high rank, etc. The motivation of this article is intuitively reasonable. 2. This paper is easy to understand. 3. It is reasonable to use classic (initialization) pruning methods such as SNIP for adaptation. Weaknesses: 1. This article highlights too many contributions. I respectfully admit that there are lots of requirements to be met in the PEFT field, such as performance, efficiency, resource costs, adapter switching, etc. However, the premise for proposing a method that is good at adapter switching is that it must perform well under a single-adapter setup and on tasks in a variety of fields. Compared with tasks such as image generation and LLaMA, more basic tasks may better reflect the strength of the PEFT method. Therefore, I suggest that the authors refer to methods like VeRA and LoRA to perform experiments. In short, GLUE and image classification tasks are suggested to be added. 2. For efficiency, which is also a very necessary property for PEFT, can the authors provide the *time per epoch* and *peak GPU memory usage* on the LLaMA-2-7B model or maybe larger models? This article repeatedly emphasizes that the proposed method has very few parameters, but this does not mean that it can be more efficient than LoRA. This is because the number of parameters has no absolute relationship with GPU cost and training time. To save trouble, you can just provide the efficiency comparison between the proposed method and LoRA on the LLaMA-2-7B model under a single adapter (when they achieve similar performance). 3. 
I think this is a strong claim for high rank. Many works show that high rank may be better, but it does not necessarily mean that the higher the rank, the better the effect. This phenomenon varies depending on the model and data. For example, [1] shows that if the rank is too high, LoRA's performance may deteriorate. This is intuitively reasonable (my personal view), because a large number of parameters does mean strong expressiveness, but it may lead to a more complex optimization process, making it difficult to converge to its peak capability. Therefore, high rank is an extremely strong conclusion, which needs to be verified by comprehensive experiments. I do not recommend that authors continue to claim this contribution unless there are more detailed experimental results for support. 4. Minor: Why not consider SynFlow as one of your strategies? Because it seems that you don't need to initialize the mask for all datasets with SynFlow. [1] Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning, ICLR 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: N.A. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for constructive feedback and for appreciating the strengths of our work and its motivation. Below we address the concerns listed in the weaknesses section: 1: Thank you for raising the point about the number of contributions in our paper. To summarize our main contribution, we highlight that SHiRA – when used with even the most basic pruning metrics (such as weight- or gradient-magnitude, SNIP, structured masks, etc.) – significantly outperforms LoRA on a variety of large-scale tasks in both large vision and large language domains. Additionally, it brings rapid switching benefits for deployment (by changing only 1-2% of params) and encourages multi-adapter fusion (by having better orthogonality properties). As evident from Figs. 1, 4, 6 and Tables 1-4 (main paper), various types of SHiRA masks significantly outperform LoRA. Based on our current study with our selected types of masks, if we were to recommend one technique, we would recommend SHiRA-SNIP, which performs consistently well across both vision and language problems. We further conducted more experiments on image classification and GLUE tasks using SHiRA-WM (since weight magnitude is the easiest mask to use). For image classification, we finetune a Vision Transformer (ViT) using LoRA and SHiRA on four common transfer learning datasets, namely, CIFAR-10, CIFAR-100, Food101, and the Describable Textures Dataset (DTD). Both methods have comparable parameter counts of around 300K. As shown in Table S1 (rebuttal pdf), we outperform LoRA on all image classification tasks. For GLUE, we use the code released by SoRA [P1], which relies on dynamically adjusting the ranks of the adapters. In Table S2 (rebuttal pdf), we report accuracy on four common GLUE tasks: QNLI, COLA, SST2, and MRPC. Accuracy numbers for LoRA and SoRA are taken directly from the SoRA paper since we use the official code to run the SHiRA experiments. 
As evident, with a nearly 2x smaller adapter, SHiRA outperforms LoRA by 1.1% accuracy on average. Further, SHiRA achieves a similar accuracy to SoRA while being 30% smaller in adapter size. Indeed, SoRA cannot enable rapid switching like SHiRA. Therefore, we again demonstrate that a simple approach like SHiRA-WM outperforms LoRA and its advanced variants with similar or significantly better accuracy while providing additional deployment benefits. 2: Please see section B in the common response. 3: We agree with the reviewer that arbitrarily increasing the rank need not benefit the task at hand. It is important to note that recent studies [P3, P4] have identified a performance gap between LoRA and full-model fine-tuning. Therefore, techniques for higher-rank adaptation are suggested in [P3, P4], with insights gleaned from various downstream tasks. Further, [P2] suggests that the rank of the fine-tuning update is related to the size of the model and how it is trained. Importantly, our experiments with SHiRA result in higher-rank updates without explicit assumptions on the rank. Hence, we do not need to explicitly set a rank for SHiRA (unlike LoRA). However, we completely agree with the reviewer that claims about high rank warrant further studies. Therefore, we will discuss all these related works and adjust our claims accordingly. 4 (Minor): As mentioned before, our objective was to use a few standard weight-importance metrics from the pruning literature to determine masks and to see if they outperform LoRA (which they do). SynFlow is an unsigned version of SNIP (see Eq. 1 in [P5]). We agree with the reviewer that ranking important weights by the unsigned SNIP value could be interesting. We leave this ablation to future work. We expect it to work well because SynFlow will select at least a subset of the same weights that were selected by SNIP. With these new results, the paper has improved significantly. 
We would greatly appreciate it if the reviewer could increase the rating of our work. [P1] Sparse Low-rank Adaptation of Pre-trained Language Models (EMNLP 2023). https://arxiv.org/abs/2311.11696 [P2] Intrinsic dimensionality explains the effectiveness of language model fine-tuning. https://arxiv.org/abs/2012.13255 [P3] MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning. https://arxiv.org/pdf/2405.12130 [P4] PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization. https://arxiv.org/pdf/2402.16141 [P5] Zero-Cost Proxies for Lightweight NAS (ICLR 2021). https://arxiv.org/pdf/2101.08134 --- Rebuttal Comment 1.1: Title: update Comment: Thanks for the authors' rebuttal; I believe it resolves most of my concerns. However, I still have one comment, which is a continuation of weakness 1. A PEFT method applicable to multiple fields (even if it is only applicable to image generation) should offer focused and easy-to-understand conclusions. For example, LoRA tells users the approximate relationship between rank and performance. As for SHiRA, as a user, what mask strategy should I adopt? The authors do not seem to give specific guidance or conclusions. In addition, a more personal question: why do the authors leave the random mask out of most of the experiments? Judging from the results in Table 10, the random strategy is not bad. I do not think this strategy should go unshown and unanalyzed just because it sounds simple. On the contrary, I think that if the author's core argument is centered around the following, I think the insight and contribution of this article will be more meaningful: Even the simplest random gradient mask is an extremely effective means of PEFT. For the ML community, simple and effective methods have always been respected and encouraged, especially for PEFT, a field that is highly relevant to engineering applications. Therefore, for my concern, the authors do not need to add additional experiments. 
Can the authors give a simple conclusion, for example, in which field which mask strategy is more effective? --- Reply to Comment 1.1.1: Title: Further clarification on conclusions Comment: We are very grateful to the reviewer for their feedback on our rebuttal. We are happy to see that our rebuttal addressed most of their concerns. We completely agree that it is important to have clear conclusions for any study. To address this final concern, we propose to include the following separate “Discussion” section in the paper before the “Conclusion” section, where we will summarize our key findings. ================================== __Discussion__ To summarize our main contribution, we highlight that SHiRA – when used with even the most basic pruning metrics (such as weight- or gradient-magnitude, SNIP, structured masks, etc.) – significantly outperforms LoRA on a variety of large-scale tasks in both large vision and large language domains. For LVM style-transfer applications, we found that SHiRA-Struct is the most effective masking technique due to its special orthogonality properties that aid multi-adapter fusion. However, SHiRA-SNIP and SHiRA-Grad are not far behind and achieve performance competitive with SHiRA-Struct. On the LLM commonsense reasoning side, SHiRA-SNIP is the best strategy among the masking techniques we have considered in this work. Specifically, as discussed in the main paper, SHiRA-Struct did not achieve good results on the more complex commonsense reasoning tasks since it is a combination of a rank-1 adapter and a highly sparse diagonal adapter. SHiRA-Grad on LLMs is about 0.8% worse in accuracy than SHiRA-SNIP (76.6% vs. 77.4% average accuracy on commonsense reasoning for LLaMA-1). Therefore, in conclusion, for the applications/fields and the masking techniques considered in this paper, SHiRA-SNIP works well across both language and vision domains. Hence, we recommend SHiRA-SNIP as one of the strongest candidates for sparse finetuning. 
================================== As for SHiRA-Random, we completely agree with the reviewer that the random mask is in fact a strong baseline (Reviewer AaSE also pointed this out). The main reason we could not include it in the main paper was the space limitation at the initial submission: presenting HPS scores for SHiRA-Rand in main paper Table 1 would have also required us to include the generated images of the SHiRA-Rand baseline in the main paper, and we did not have the space for both. For the next version of the paper, we will move some of these results from the Appendices to the main paper. About the final but important comment by the reviewer: "_On the contrary, I think that if the author's core argument is centered around the following, I think the insight and contribution of this article will be more meaningful: Even the simplest random gradient mask is an extremely effective means of PEFT. For the ML community, simple and effective methods have always been respected and encouraged, especially for PEFT, a field that is highly relevant to engineering applications._" We completely agree with the reviewer on this statement (as we also mentioned in our rebuttal previously). We have included it in the new “Discussion” section as specified above. Further, we will modify the abstract, introduction, and conclusion sections accordingly to make it clearer that this is one of the fundamental contributions of our work. We really appreciate the reviewer’s detailed feedback. It has helped improve our work significantly. Please let us know if you have additional concerns. We would be grateful if you could raise the rating of our work. Thank you.
Summary: The authors propose a new type of adapter, SHiRA, for parameter-efficient fine-tuning. SHiRA selects a subset of parameters to update and thus enables both rapid adapter switching and multi-adapter fusion, while traditional methods like LoRA can't have it both ways. Experiments based on Stable Diffusion and LLaMA show the effectiveness. Strengths: 1. The writing is good and clear. 2. SHiRA significantly reduces the fusion time of LoRA-type adapters. Weaknesses: 1. The technical novelty is limited. SHiRA could be seen as a type of partial fine-tuning, which has already been well explored in past years. This paper just applies a similar approach to larger models than previous works, and there is no evidence that previous works like [35,27,3,32,8] can't be used for current large models. 2. Another weakness is the value of the solution. As the authors state, this paper aims to solve the problem that traditional methods like LoRA can't rapidly fuse weights when we need to switch between multiple adapters frequently. However, the fusion time might be far less than the inference time, making the objective less meaningful. In addition, there are not many application scenarios with such a requirement. Technical Quality: 3 Clarity: 3 Questions for Authors: The authors are encouraged to further demonstrate the novelty and value of their proposed method in the rebuttal. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for constructive feedback and for appreciating the strengths of our work. Below we address the concerns listed in the weaknesses section: 1. Please refer to response (B) in the common response section. In summary, existing partial finetuning techniques enable gradients for the entire base model weight tensors, which makes them impractical for large genAI models. In contrast, with our new SHiRA PEFT implementation, we require 16% lower GPU memory than LoRA for training. 2. Practical Importance: Real-time adapter switching is a highly practical problem and is being heavily considered by the industry for on-device deployment of genAI models. For instance, please see reference [P1], which documents the efforts of one of the leading companies in this direction. Section A (common response) further provides concrete data to support the rapid switching problem for on-device applications. We summarize our key innovations as follows: (1) rapid switching; (2) natural multi-adapter fusion properties of sparse adapters - we demonstrate superior performance to LoRA multi-adapter fusion across various language and vision tasks; (3) a PEFT implementation for sparse finetuning approaches (see section B in the common response); (4) showing that even the simplest of SHiRA masking techniques from the pruning literature (e.g., weight magnitude, etc.) significantly outperform LoRA on many diverse tasks. Hence, even without the rapid switching motivation, with our new PEFT implementation, we have demonstrated that sparse finetuning (SHiRA) can now do everything that LoRA and its variants can do, with added deployment benefits. For more details, we request the reviewer to refer to responses (A), (B), and (C) in the common response section. With these new results, the paper has improved significantly. We would greatly appreciate it if the reviewer could increase the rating of our work. 
[P1] Apple Intelligence Foundation Language Models. https://arxiv.org/pdf/2407.21075 --- Rebuttal Comment 1.1: Comment: Thank you for providing the rebuttal. After reading other reviews and rebuttals, I tend to maintain my initial recommendation.
Summary: Low Rank Adaptation (LoRA) is a crucial technique for fine-tuning LLMs and LVMs. This paper addresses two limitations of LoRA: 1. inference overhead while enabling rapid adapter switching; 2. concept loss with multiple adapters. The paper proposes Sparse High Rank Adapter (SHiRA), which directly trains a small portion of the model’s weights while keeping the other weights frozen. SHiRA has the following advantages compared to LoRA: 1. no inference overhead, 2. rapid adapter switching, and 3. less concept loss. Experiments on LVMs and LLMs validate its performance. Strengths: 1. Efficient fine-tuning techniques for large models are a significant topic in the current field of deep learning. 2. Well structured and easy to follow. 3. The proposed technique is simple and sound. 4. Extensive experiments demonstrate the effectiveness of SHiRA. Weaknesses: My main concern lies in the novelty of the approach. Directly fine-tuning a small subset of parameters is a straightforward idea and should have been widely used even before the introduction of LoRA. I don’t see the unique innovation in the proposed method, which makes its superior performance surprising to me. Technical Quality: 3 Clarity: 3 Questions for Authors: See "Weaknesses". Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors provide a discussion on limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the strength and contributions of our work in the domain of parameter-efficient finetuning. We are happy to see that the reviewer finds the effectiveness of SHiRA surprising. In fact, this is precisely our point: even though LoRA is a well-established PEFT method, the simplest and most natural form of efficient finetuning outperforms LoRA and its advanced variants. We wanted to highlight this finding and establish SHiRA as a strong baseline for future adapter methods. We summarize our key innovations as follows: (1) rapid switching; (2) natural multi-adapter fusion properties of sparse adapters - we demonstrate superior performance to LoRA multi-adapter fusion across various language and vision tasks; (3) a PEFT implementation for sparse finetuning approaches (see section B in the common response); (4) showing that even the simplest of SHiRA masking techniques from the pruning literature (e.g., weight magnitude, etc.) significantly outperform LoRA on many diverse tasks. Hence, even without the rapid switching motivation, with our new PEFT implementation, we have demonstrated that sparse finetuning (SHiRA) can now do everything that LoRA and its variants can do, with added deployment benefits. For more details, we request the reviewer to refer to responses (A), (B), and (C) in the common response section. With these new results and discussions added, we believe our paper has significantly improved in establishing the benefits of SHiRA in the domain of parameter-efficient finetuning. We would like to thank the reviewer for all their suggestions. We would greatly appreciate it if the reviewer could increase the rating of our work. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. Considering all the comments and the author’s replies, I’ll keep the score.
Rebuttal 1: Rebuttal: We thank all reviewers for constructive feedback and for appreciating SHiRA’s many strengths. Overall, reviewers found: (1) SHiRA has significant benefits for mobile/edge deployment due to reduced memory and latency for adapter switching (Reviewers K5NY, 2XqP, ZsNM, Xu4N, AaSE); (2) SHiRA addresses a critical limitation of LoRA by significantly improving multi-adapter fusion (K5NY, 2XqP, AaSE); (3) our extensive experiments validate the effectiveness of SHiRA across a wide range of vision and language tasks (K5NY, 2XqP, XU4N, AaSE); (4) our motivation is intuitive (XU4N) and paper is easy to follow (2XqP, ZsNM, XU4N). Below, we summarize the new experiments and address common concerns related to (A) Key contributions and value of rapid switching; (B) memory costs of SHiRA during training with a new PEFT implementation and novelty w.r.t. existing partial (sparse) finetuning methods; (C) new experiments on GLUE and Image Classification tasks. A. Key Contributions and New Data to Support Rapid Switching (K5NY, ZsNM, AaSE) Key contributions of this work are: (1) rapid switching; (2) natural multi-adapter fusion properties of sparse adapters; (3) PEFT implementation for sparse finetuning approaches (see section B below); (4) show that even the simplest of SHiRA masking techniques significantly outperform LoRA on many diverse tasks. Hence, even without the rapid switching motivation, with our new PEFT implementation, we have demonstrated that sparse finetuning (SHiRA) can now perform everything that LoRA and its variants can do, with the added deployment benefits. Therefore, our contribution must also be seen as establishing sparse finetuning as a strong adapter baseline for future LoRA works. We now provide more data to support rapid switching motivation. While end-to-end inference latency is an important part of the deployment, another essential factor is user experience. 
Specifically, user experience entails the time to get the first output, which includes the switching time (i.e., the time to get the adapter inference-ready, a critical pre-inference optimization) as well as the inference time. Users want to switch adapters quickly and flexibly on phones; long adapter fusion times severely degrade the user experience and must be minimized (irrespective of the actual inference time). This need for quick switching among adapters is also highlighted in recent popular industry publications and is in heavy use in the real world (e.g., see Apple’s paper [P1]). In Table S4 (rebuttal PDF), we present end-to-end switching times for prevalent LVMs and LLMs: SDXL and LLaMA2. Notably, even for a smaller model like SDXL (2.6B params compared to 7B params in LLaMA2), SHiRA achieves a 4.68x faster switching time (0.77s vs. 3.6s), while for LLaMA2, with larger tensor dimensions, SHiRA attains a 5.71x speedup (4.93s vs. 28.15s) on a consumer-grade CPU. Note that fusing LoRA adapters for LLaMA2 on a CPU takes 28.15s (nearly half a minute). Indeed, waiting half a minute for the adapter to switch/fuse is quite substantial and hampers the user experience significantly. In contrast, SHiRA can get the adapter ready for inference within 4.93s, thus significantly improving the user experience. Note that, once the adapters are fused, inference time on the hardware is equal for both LoRA and SHiRA. Moreover, as discussed in reference [1] in our paper, for the unfused LoRA case (which can enable rapid switching), the inference latency can be up to 30% higher, which is not the case with SHiRA. Finally, this pre-inference user-experience optimization has not been a major focus of adapter research. Therefore, another contribution of our work is that we are bringing this new problem to the research community. Such optimizations are very important for practical GenAI deployment; other similar examples include time to first token for LLM inference. B. 
PEFT Implementation of SHiRA and Novelty wrt Partial (Sparse) Finetuning (K5NY, 2XqP, ZsNM, Xu4N, AaSE) To address SHiRA’s training memory costs, we created a memory- and latency-efficient PEFT-based implementation for SHiRA. As discussed in Appendix B (submitted paper), a scatter_op can be utilized to manage sparse weight updates during inference. Given that SHiRA only finetunes a small subset of the pretrained model weights, we adopt a similar scatter_op-based approach for training. This allows us to retain only the sparse trainable parameters in the optimizer, thereby significantly reducing the peak GPU memory utilization during training. As shown in Table S3, SHiRA not only trains at almost the same speed as LoRA, but also consumes 16% lower peak GPU memory. Compared to other variants like DoRA, SHiRA training consumes significantly lower peak GPU memory (40% lower) and trains much faster (SHiRA is about 36% faster than DoRA). All memory data was collected using the “psutil” utility within the “Transformers.Trainer” training loop for LLaMA2-7B. Finally, partial finetuning techniques proposed in the pre-LoRA era [35,27,3,32,8] do not have such memory-efficient implementations, which makes them impractical for large generative models. This is because, without the PEFT implementation (e.g., using only gradient masking), the training memory cost is high since gradients are enabled for the whole weight tensor (and not just the 1% of parameters). Therefore, SHiRA significantly outperforms prior partial finetuning techniques in training memory costs and is highly practical for modern LVM and LLM adaptation tasks. This is a clear difference between SHiRA and prior partial (sparse) finetuning techniques. C. More experiments on Image Classification and GLUE tasks (XU4N) Tables S1, S2 (rebuttal) show new experiments on image classification and GLUE, where we again show SHiRA’s superior performance over other low-rank methods. 
Please see our response (point 1) to Reviewer XU4N for more details. [P1] Apple Intelligence Foundation Language Models. https://arxiv.org/pdf/2407.21075 Pdf: /pdf/8b05c10f72f88f5745508be7704db656b673fdb8.pdf
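The scatter-op mechanism described in response (B) above can be pictured in a few lines: keep only the sparse trainable values (and their fixed indices) in optimizer state, and scatter them into the frozen base weight to form the effective weight. A NumPy sketch of the idea under my own naming (`ScatterSparseAdapter`, `sgd_step` are illustrative; this is not the authors' implementation):

```python
import numpy as np

class ScatterSparseAdapter:
    """Keep only the ~1% trainable values; scatter them into the frozen base
    weight to form the effective weight. A sketch of the idea only."""

    def __init__(self, base_weight, mask):
        self.base = base_weight                      # frozen, never updated
        self.idx = np.flatnonzero(mask)              # fixed sparse support
        # only these values need optimizer state
        self.values = base_weight.ravel()[self.idx].copy()

    def effective_weight(self):
        W = self.base.ravel().copy()
        W[self.idx] = self.values                    # the scatter op
        return W.reshape(self.base.shape)

    def sgd_step(self, grad, lr=1e-2):
        # gather: only gradients on the sparse support are needed
        self.values -= lr * grad.ravel()[self.idx]

rng = np.random.default_rng(0)
W0 = rng.normal(size=(8, 8))
flat = np.zeros(W0.size, dtype=bool)
flat[rng.choice(W0.size, size=3, replace=False)] = True
mask = flat.reshape(W0.shape)                        # 3 trainable positions

adapter = ScatterSparseAdapter(W0, mask)
adapter.sgd_step(np.ones_like(W0))                   # one dummy update
changed = adapter.effective_weight() != W0
print(int(changed.sum()))  # 3: only the masked positions moved
```

Switching adapters then amounts to swapping the small (idx, values) pair rather than fusing a dense low-rank product into every weight matrix, which is the rapid-switching argument made in the rebuttal.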
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper introduces Sparse High Rank Adapter (SHiRA), a novel method to address the limitations of Low Rank Adaptation (LoRA) in some settings. SHiRA aims to minimize inference overhead, facilitate rapid adapter switching, and reduce concept loss when using multiple adapters. By training only 1-2% of the base model weights, SHiRA maintains high sparsity, enabling efficient on-device deployment. The paper provides both theoretical insights and empirical evidence to support the effectiveness of SHiRA, showcasing its superiority over LoRA in various experiments on large vision and language models. Strengths: 1. SHiRA introduces a novel PEFT method that focuses on high sparsity and selective training of base model weights. 2. SHiRA significantly reduces the memory and latency overhead associated with adapter switching, making it highly suitable for mobile and edge device deployment. Additionally, it does not introduce any inference overhead. 3. By enabling multi-adapter fusion without significant interference, SHiRA addresses a critical limitation of LoRA, thereby preserving the integrity of concurrent adapters. 4. SHiRA effectively resolves a major drawback of LoRA by allowing multiple adapters to function together seamlessly. 5. The method is tested on various large models, including language and vision tasks. 6. The paper provides solid theoretical foundations for the high sparsity and high rank properties of SHiRA. Weaknesses: 1. Lack of baseline: From Table 2, we observe that SHiRA works better than LoRA but performs worse than DoRA. Therefore, I was wondering whether the authors could provide some comparison with DoRA on vision tasks. 2. Lack of end-to-end efficiency analysis: In the Appendix, the authors provide some evidence of the efficiency of the sparse adapter. However, it does not provide an end-to-end task switching time for LoRA, DoRA, and SHiRA. I think it would be better to include this result in the main paper. 3. 
Limited applications: Although this method emphasizes its advantages in rapid adapter switching and multi-adapter fusion, its practical applications remain questionable. As noted in Appendix B, the method primarily accelerates scenarios with a hidden dimension of 8192, potentially reaching the I/O bottleneck. However, for other settings, the fuse operation may not experience significant slowdowns. For multi-adapter fusion, SHiRA performs better than LoRA-based methods. However, as it still leads to about a 4% performance drop, it remains difficult to apply in real-world settings. Technical Quality: 3 Clarity: 4 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for constructive feedback and for appreciating the strengths of our work. Below we address the concerns listed in the weaknesses section: 1) As discussed in our submitted paper, SHiRA is orthogonal to advanced LoRA variants, e.g., DoRA, and can be efficiently combined with them to improve the expressive power of the SHiRA adapter. Table 2 presents these orthogonality results wrt DoRA. Specifically, Table 2 shows that SHiRA-WM, when combined with DoRA, improves the performance of the baseline SHiRA-WM by 0.8% accuracy on commonsense benchmarks. Moreover, this SHiRA-WM-DoRA adapter still changes only 1% of parameters in the base model. Hence, we see that SHiRA is clearly orthogonal to DoRA and can be applied on top of it while preserving the rapid switching benefits. Note that the absolute performance of base SHiRA is very close to DoRA (Table 3 of the submitted paper) while bringing additional deployment efficiencies. Further, we provide a qualitative comparison between LoRA, SHiRA, and DoRA on our DreamBooth setup. As shown in Fig. S1, SHiRA produces images of similar quality to LoRA and DoRA, with the added benefit of rapid adapter switching. While the DoRA images also look impressive, the SHiRA dog image looks more diverse than both LoRA and DoRA. Moreover, the canvas image for SHiRA has more of a “canvas” effect while the outputs of LoRA and DoRA are smoothed out. 2) We have included the analysis in section A of the common response. We thank the reviewer for suggesting this new analysis; we will add this new data to the main paper. 3) We address this concern in three steps: Practical Importance: Real-time adapter switching is a highly practical problem and is being heavily considered by the industry for on-device deployment of genAI models. For instance, please see reference [P1], which documents the efforts of one of the leading companies in this direction. 
Section A (common response) further provides concrete data to support the rapid switching problem for on-device applications. Speedup in Switching Time: We have plotted the data from Fig. 7 (Appendix B in the submitted paper) in semi-log-y format in Fig. S3. Clearly, we see 16x to 13x improvements in adapter loading using the scatter-op for SHiRA. Hence, the scatter-op is significantly faster than LoRA fusion even at smaller embedding dimensions. For realistic networks like SDXL and LLaMA2, end-to-end switching times are provided in Table S4 (rebuttal pdf). Therefore, we still see a significant speedup (4.68x-5.71x) in adapter switching for real networks. Multi-Adapter Fusion: Compared to LoRA, which suffers 11% degradation upon naïve multi-adapter fusion, SHiRA only degrades 4%. Also, many works [7, 32, 24] in the literature have shown that naïve merging of LoRA adapters leads to significant concept loss and hence non-trivial postprocessing is required for effective merging. In this work, we show that naïve SHiRA adapter merging leads to significantly less concept loss and produces better results in both the language and vision domains. Finally, note that multi-adapter fusion is a very significant practical use case. Since SHiRA naturally improves multi-adapter fusion properties, it provides a solution to an important real-world problem. With these new results, the paper has improved significantly. We would greatly appreciate it if the reviewer could increase the rating of our work. [P1] Apple Intelligence Foundation Language Models. https://arxiv.org/pdf/2407.21075 --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your rebuttal. I will maintain the score.
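One way to build intuition for the multi-adapter fusion claim discussed in this thread: two independently chosen 1%-sparse supports barely overlap, so summing two sparse adapters rarely makes their updates collide on the same weight, whereas low-rank updates are dense and always interfere. A toy NumPy illustration (my own construction, not the paper's orthogonality analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512 * 512  # one flattened weight matrix

def sparse_delta(density=0.01):
    """A random 1%-sparse adapter update (stand-in for a trained sparse delta)."""
    idx = rng.choice(n, size=int(density * n), replace=False)
    d = np.zeros(n)
    d[idx] = rng.normal(size=idx.size)
    return d

a, b = sparse_delta(), sparse_delta()
# fraction of adapter a's trainable weights also touched by adapter b
overlap = np.count_nonzero(a * b) / np.count_nonzero(a)
print(round(overlap, 3))  # ~0.01: the two supports barely collide
```

Under this construction, naively adding the two deltas leaves ~99% of each adapter's weights untouched by the other, which matches the intuition that sparse adapters merge with less concept loss than dense low-rank ones.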
MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models
Accept (poster)
Summary: This paper introduces MATES, a method termed "model-aware data selection with data influence models". MATES is designed to optimize data selection for large language model (LLM) pre-training efficiently. The method dynamically considers various influence models during different stages of pre-training, tailored to the current state of the training model. Experiments validating this approach have been conducted using the C4 dataset with Pythia models. Strengths: - This paper reveals an important observation that the influence of data changes throughout the training process. It underscores the necessity for model-aware data selection methods to adapt accordingly. - The utilization of the Gumbel-Top-k algorithm is particularly interesting and well-suited for balancing data quality and diversity. This diversity is crucial as it helps mitigate the independent influence assumption of the training data. - Extensive experiments have been conducted to validate the method's effectiveness. Weaknesses: - **Gap between Motivation and Experiment Design**: While the authors have compared MATES with multiple baselines concerning data selection, there remains a gap regarding the necessity of data selection for pretraining as opposed to full training or no continuous pretraining. - **Full Training**: The results and computational costs of pretraining on the entire C4 dataset have not been demonstrated. If the performance significantly drops or the computational cost reduction is minimal, then data selection may not be necessary in the first place. - **No Continuous Pretraining**: While MATES focuses on pretraining, the chosen model, Pythia, is already pretrained on the Pile dataset (i.e., pretraining a pretrained model). If Pythia already performs adequately on those tasks, then data selection might not be necessary in the first place. - **Significance of the Results**: The results presented in Table 1 are not statistically significant. 
For instance, for Pythia-1B, only 5 out of 18 tasks show improvements greater than one **standard error**, suggesting that the p-values are likely higher than 0.05. In some recent similar works [1], the improvement over random selection can be up to seven times the **standard deviation**. Given that MATES also adds computational costs for influence computation, the significance of these results is arguable. - **Missing Implementation Details**: Some critical implementation details are insufficiently described, making the presentation somewhat confusing. See the Questions section. [1] LESS: Selecting Influential Data for Targeted Instruction Tuning, ICML 2024. Technical Quality: 2 Clarity: 2 Questions for Authors: - For the results presented in Table 1 and Figure 3, what is the amount of sampled data used for computing $I_m$? How is the computation of $I_m$ made efficient, considering that each sampled data point requires one Adam update and a recomputation of the reference task loss? - In relation to Figure 5(a), how is LASSO regression employed to "separate" influence scores? Does this imply that data points within the same batch can have varying influences? - For Figures 4 and 6, what is implied by "Val. Spearman"? Was the influence computed for the entire dataset? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Besides the limitations discussed by the authors themselves, an additional limitation of MATES is that it uses a single Adam step to approximate the change in loss. This approach assumes that each data point will be trained on exactly once. While this assumption might be valid for certain pretraining scenarios, it does not hold for fine-tuning, where all data are used multiple times. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
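The Gumbel-Top-k selection the review highlights works by perturbing each candidate's score with independent Gumbel noise and taking the plain top-k; the selected set is then a sample without replacement in proportion to softmax(scores), so high-influence data is favored while diversity is preserved. A generic NumPy sketch of the trick (not MATES's actual code):

```python
import numpy as np

def gumbel_top_k(scores, k, rng):
    """Sample k indices without replacement, proportional to softmax(scores)."""
    gumbel = -np.log(-np.log(rng.random(scores.shape)))  # Gumbel(0, 1) noise
    return np.argsort(scores + gumbel)[-k:][::-1]        # plain top-k on noisy scores

rng = np.random.default_rng(0)
influence = rng.normal(size=1000)        # stand-in data-influence scores
chosen = gumbel_top_k(influence, k=100, rng=rng)
# high-influence points are favored, yet lower scorers still get picked,
# trading selection quality against batch diversity
```

Setting the noise to zero recovers deterministic top-k selection; scaling the scores by an inverse temperature interpolates between the two regimes.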
Rebuttal 1: Rebuttal: Thank you for your review of our paper! We will address your questions/comments below:

**Weakness 1:** Comparison with no continuous pretraining and full training

**Response:** We first want to clarify that we only followed the Pythia architecture but **didn't use any of its pretrained weights.** All the pretraining models in the paper are trained from scratch using C4. We chose C4 instead of the Pile since models pretrained on C4 can outperform those pretrained on the Pile, which has also been verified in the recent FineWeb paper. We will make this setup clearer in the next version. For the full training, we followed the original Pythia paper and pretrained the models for 150k steps. As shown in Figure 1 in our supplementary pdf, MATES can improve the pretraining efficiency by more than 3x, given that MATES at 50k steps performs comparably to or better than full-data training at 150k steps. Please see the general response 3.1 for more details.

**Weakness 2:** Significance of the results

**Response:** We hope to clarify that the **standard error** in Table 1 is computed over the per-example accuracies of all the evaluation examples, **not the standard deviation** across different random seeds. Actually, this value is the "stderr" returned by the `lm-evaluation-harness`. It only shows the **variability** of the model's accuracy across all the examples, **not the stability of each method**. We are aware that using $\pm$ to show this metric is inappropriate and will replace it with parentheses in the next version. LESS computes the standard deviation using different training seeds, while pretraining with different seeds is not feasible with our available computing resources. Our main baselines, DsDm and QuRating, also didn't report the standard deviation across different seeds. We also note that LESS and QuRating are papers from the same research group, while QuRating is mainly designed for pretraining data selection and LESS is for instruction tuning data selection.
Generally, the average performance gain in pretraining is smaller than that in instruction tuning since pretrained models are typically evaluated on a broader set of benchmarks. Table 1 (both in our original paper and supplementary pdf) demonstrates that MATES **doubles the gain** achieved by the state-of-the-art method, QuRating, with fewer total FLOPs, so our improvement is substantial. Since LESS requires computing the gradients of all training data to perform the selection, our compute resources only allow us to run it at the short 40k decay stage (the same setup as Table 2). The results are shown below:

| | SciQ | ARC-E | ARC-C | LogiQA | OBQA | BoolQ | HellaSwag | PIQA | WinoGrande | Average |
| ----- | -------- | -------- | -------- | -------- | -------- | -------- | --------- | -------- | ---------- | -------- |
| LESS | 63.0 | 41.3 | 23.6 | 24.0 | **28.8** | 54.6 | **42.1** | 64.4 | 51.3 | 43.7 |
| MATES | **67.3** | **41.7** | **24.7** | **26.9** | **28.8** | **59.6** | 40.1 | **67.6** | **52.1** | **45.4** |

This shows that directly applying instruction data selection methods may not work well in the pretraining setup, suggesting that pretraining and instruction-tuning data selection are two different research scenarios.

**Question 1:** The amount of sampled data used for computing oracle scores and the way to make the computation efficient

**Response:** The sampled data for computing oracle scores is 80k in the first round and 20k in the following rounds since we continuously fine-tune the data influence model. Even with the 1B model, the wall-clock time to compute one oracle score is only 2.5s on one GPU, which means we can obtain all the required oracle scores during 50k-step pretraining on one node (8 GPUs) in **around 12 hours**, significantly lower than the actual pretraining time (**4 days+**).
To further make the influence computation efficient, we fine-tune a small data influence model to learn the oracle scores and predict the data influence. Even with the additional data selection costs, MATES greatly elevates the scaling curves (Figure 3) w.r.t. the total FLOPs, making this extra computation justifiable.

**Question 2:** How is LASSO regression employed to separate influence scores? Does this imply that data points within the same batch can have varying influences?

**Response:** Yes, data points within the same batch can have varying influences. We sample different batches to calculate the influence scores, and one data point can appear in multiple batches. Formally, the input of the regressor is an $m$-dimensional binary vector **$v = [v_1, v_2, ..., v_m]$** (where $m$ is the size of the total probing data pool, as mentioned in Algorithm 1), in which $v_i=1$ denotes the inclusion of the $i$-th data point in the current batch and $v_i=0$ its exclusion; the output is the **oracle data influence score probed by one-batch one-step training**. We probed 80k input-output pairs that can be directly utilized to train the LASSO regressor, and we can read off an $m$-dimensional individual data influence vector from the regressor's weights. This method is inspired by the linear datamodels in DsDm [3] to separate the individual data influences from the collection of batch data influences. We will add more details about LASSO in the next version.

**Question 3:** What is implied by "Val. Spearman"? Was the influence computed for the entire dataset?

**Response:** The oracle influence is only computed on a tiny subset of 80k/20k examples (general response 2.2), not the entire dataset. For the "Val. Spearman", please see the general response 2.3 for more details.

**Limitation 1:** The assumption that each data point will be trained on exactly once does not apply to fine-tuning.

**Response:** Thank you for pointing this out. We assumed that the pretraining data is infinite w.r.t.
current computing resources (lines 26-27) and thus distinct from other data selection scenarios, while most fine-tuning scenarios only have a limited number of supervised examples.

---

Rebuttal Comment 1.1: Title: Thank you for your rebuttal

Comment: I appreciate the authors' comprehensive explanation and their efforts to address my previous concerns. The experiments conducted for full training are promising and effectively demonstrate the practical application of MATES. However, I suggest that for Figure 1 in the supplementary materials, it would be more accurate to label it as "Full" instead of "Random." Additionally, I recommend that the authors provide a detailed analysis of the computational overhead associated with data selection, including the training of the data influence model, in comparison to the pre-training costs. This would resolve the concern about the computational costs of this manuscript. While the approach of pretraining Pythia on C4 from scratch seems somewhat unconventional, I acknowledge that this paper opens up a promising direction for exploring how the influence of data evolves during different phases of pretraining. With the addition of the suggested clarifications, I believe this paper will make a valuable contribution to the community. Accordingly, I am inclined to raise my score to 5.

---

Rebuttal 2: Title: Code to compute our reported standard error

Comment: For your reference (Weakness 2), we provide the relevant code block used to compute `stderr` in `lm-evaluation-harness`:

```python
import math

def mean(arr):
    # Defined elsewhere in the harness; shown here for completeness.
    return sum(arr) / len(arr)

def sample_stddev(arr):
    mu = mean(arr)
    return math.sqrt(sum([(x - mu) ** 2 for x in arr]) / (len(arr) - 1))

def mean_stderr(arr):
    return sample_stddev(arr) / math.sqrt(len(arr))
```

The `arr` in the code is the binary accuracy list over all the examples, where 1 denotes that the model answers correctly and 0 that it answers incorrectly.
The return value of `mean_stderr` only shows the variability of the model's accuracy across all the examples, not the variability across different random seeds, as reported in LESS.

---

Rebuttal 3: Title: Thank you for your response

Comment: We appreciate your response and recognition of our work! For Figure 1 in our supplementary pdf, we will change the label to "Full". For the computational overhead incurred by MATES, we reported the total pretraining FLOPs, including all the costs of maintaining the data influence model, in Table 1 of our original paper. Figure 3 also shows the total FLOPs-Acc curves, and MATES greatly elevates the scaling curves (even with the additional data selection cost) compared to random selection. However, we agree that it is a good idea to have a detailed breakdown of the computational overhead incurred by MATES, so we provide it in the tables below.

| Pythia-410M | #FLOPs (x 1e19) | Ratio of the total #FLOPs |
| -------------------------------- | --------------- | ------------------------- |
| Main model pretraining | 6.35 | 78.3% |
| Oracle data influence collection | 0.30 | 3.7% |
| Data influence model training | 9e-3 | 0.1% |
| Data influence model inference | 1.46 | 18.0% |
| Total | 8.11 | 100% |

| Pythia-1B | #FLOPs (x 1e19) | Ratio of the total #FLOPs |
| -------------------------------- | --------------- | ------------------------- |
| Main model pretraining | 17.67 | 88.5% |
| Oracle data influence collection | 0.84 | 4.2% |
| Data influence model training | 9e-3 | 0.05% |
| Data influence model inference | 1.46 | 7.3% |
| Total | 19.97 | 100% |

The data selection cost of MATES only accounts for 21.8% (Pythia-410M) and 11.6% (Pythia-1B) of the total pretraining FLOPs, compared to the state-of-the-art method, QuRating, whose selection cost accounts for 75.9% (Pythia-410M) and 53.1% (Pythia-1B).
The selection cost ratio of larger models is generally smaller since their pretraining cost dominates the total FLOPs while the training and the inference cost of our data influence model remains stable. The inference speed can also be improved with a fast-inference framework like vLLM [1]. We will include this new breakdown analysis in the next version of our paper. For the C4 dataset, we basically followed DsDm’s open-source version [2] to facilitate a fair and reproducible comparison. We also verified the effectiveness of MATES in the recent FineWeb [3] dataset (please refer to our general response 3.3), where MATES still demonstrates superior performance compared to the state-of-the-art data selection methods, such as the FineWeb-Edu classifier [3] and the fasttext-oh-eli5 classifier [4]. We will add this result in the next version. Thank you once again for your insightful reviews. Please do not hesitate to reach out if any part of our response requires further clarification. [1]: Kwon, Woosuk, et al. "Efficient memory management for large language model serving with pagedattention." Proceedings of the 29th Symposium on Operating Systems Principles. 2023. [2]: Engstrom, Logan, Axel Feldmann, and Aleksander Madry. "Dsdm: Model-aware dataset selection with datamodels." arXiv preprint arXiv:2401.12926 (2024). [3]: Penedo, Guilherme, et al. "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale." arXiv preprint arXiv:2406.17557 (2024). [4]: Li, Jeffrey, et al. "DataComp-LM: In search of the next generation of training sets for language models." arXiv preprint arXiv:2406.11794 (2024).
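The batch-to-individual influence separation described in the response to Question 2 above can be sketched on synthetic data. This toy (all sizes hypothetical) replaces LASSO with plain least squares to stay dependency-free; the linear-datamodel idea is the same: binary batch-membership vectors map linearly to probed batch influences, and the fitted weights recover per-example influences.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_probes = 20, 200           # probing-pool size and number of probed batches (toy scale)
w_true = rng.normal(size=m)     # hidden per-example influences we want to recover

# Each probe: a random binary membership vector and the resulting batch influence,
# under the linear datamodel assumption (batch influence = sum of member influences).
V = rng.integers(0, 2, size=(n_probes, m)).astype(float)
y = V @ w_true

# Recover per-example influences from the (membership, batch-influence) pairs.
# The paper fits LASSO for sparsity; ordinary least squares suffices for this toy.
w_hat, *_ = np.linalg.lstsq(V, y, rcond=None)
```

With noiseless probes and more probes than pool entries, the fit recovers the per-example influences exactly.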
Summary: This paper introduces model-aware data selection with data influence models (MATES), which selects high-quality data for pre-training large language models (LLMs). MATES addresses the issue that existing data selection methods don't adapt to evolving data preferences during pre-training. MATES continuously adapts to the main model's evolving data preferences by using a small data influence model to select the most effective pre-training data for the next stage. The data influence model also reduces the cost of computing the influence scores on the fly. Experiments on Pythia and the C4 dataset show that MATES outperforms random data selection and existing static data selection approaches. Further analysis validates the ever-changing data preferences of the pre-training models and the effectiveness of the data influence models in capturing them.

Strengths:
1. The proposed method outperforms existing static data selection baselines on the Pythia-410M model. Compared to random selection, it can achieve a given accuracy with fewer GPU FLOPs.
2. The authors demonstrate the importance of adapting to fluctuating data influence in data selection methods.

Weaknesses:
1. The authors only compare MATES to other static baseline methods on the Pythia-410M model. For most of the remaining experiments, the authors only compare to random selection.
2. Missing references. There are existing dynamic data pruning methods [1][2] that the authors may want to include in the discussion of related works. These works also noticed that the influence of data points changes as the training proceeds.

[1] Raju, Ravi S., Kyle Daruwalla, and Mikko Lipasti. "Accelerating deep learning with dynamic data pruning." arXiv preprint arXiv:2111.12621 (2021).
[2] Qin, Ziheng, et al. "InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning." The Twelfth International Conference on Learning Representations.

Technical Quality: 2 Clarity: 3

Questions for Authors: 1.
The authors only compare against a full set of baseline methods in the Pythia-410M experiments. In the Pythia-1B experiments, the authors only compare against the random selection baseline. Why are the results of other baselines omitted in the Pythia-1B experiment? How do other baseline methods such as SemDedup perform in the Pythia 1B experiments? 2. How does the proposed method MATES perform compared to using the full dataset for training? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors adequately discuss the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review of our paper! We will address your questions/comments below:

**Weakness & Question 1**: Only comparing MATES to other static baseline methods on the Pythia-410M model; for most of the experiments, only comparing to random selection.

**Response**: We acknowledge the reviewer's concern regarding the limited comparison of MATES to other baselines, specifically for larger-scale models. It is important to note that large-scale experiments are costly and often infeasible for many university research groups. In research and even production large language models like T5 [1] and Llama 3.1 [2], it is common practice to conduct comprehensive ablations at a smaller scale and then test the final performance at larger scales. With more resources, we are able to show all the 1B baselines in Table 1 in our supplementary pdf, where MATES still performs the best among all the baselines. This demonstrates our commitment to validating our approach at larger scales within our resource constraints, and we believe our results provide valuable insights into MATES's effectiveness across different model scales. Please see the general response 3.2 for more details.

**Weakness 2**: Missing references.

**Response**: Thanks for the references. Our work builds upon the growing emphasis on dynamic data selection for efficient model training. As with Raju et al. [3] and Qin et al. [4], we share the common motivation of adapting the data selection as training progresses, as the value of training samples may change throughout the learning process. Unlike Raju et al.'s ε-greedy and upper confidence bound (UCB) approach, MATES uses a dedicated data influence model to capture the complex and changing relationships between training data and model performance. As a result, MATES is able to model more nuanced data preferences than methods based on simpler heuristics like the sum of mean and variance.
Qin et al.'s InfoBatch maintains unbiased gradient expectations through gradient rescaling in the instruction fine-tuning setup. In MATES, we employ local probing to collect oracle data influence, providing a more direct measure of sample importance than InfoBatch's loss-based soft pruning policy and allowing for more accurate selection in the complex pretraining settings. We will cover these related works carefully in the next version.

**Question 2**: How does the proposed method MATES perform compared to using the full dataset for training?

**Response**: For the full training, we follow the original Pythia paper and pretrain the models for 150k steps. As shown in Figure 1 in our supplementary pdf, MATES can improve the pretraining efficiency by more than 3x, given that MATES at 50k steps performs comparably to or better than full-data training at 150k steps. Please see the general response 3.1 for more details.

[1]: Raffel, Colin, et al. "Exploring the limits of transfer learning with a unified text-to-text transformer." Journal of Machine Learning Research 21.140 (2020): 1-67.
[2]: Dubey, Abhimanyu, et al. "The Llama 3 Herd of Models." arXiv preprint arXiv:2407.21783 (2024).
[3]: Raju, Ravi S., Kyle Daruwalla, and Mikko Lipasti. "Accelerating deep learning with dynamic data pruning." arXiv preprint arXiv:2111.12621 (2021).
[4]: Qin, Ziheng, et al. "InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning." The Twelfth International Conference on Learning Representations.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors for conducting additional experiments and providing the response. The response answers my questions about missing references and full dataset training.
However, according to the additional 1B zero-shot results provided in Table 1 of the newly uploaded pdf, the proposed MATES beats the best baselines on 3 out of the 10 datasets, ties with the best baselines on 2 out of the 10 datasets, and loses to the best baselines on 5 out of the 10 datasets. This seems worse compared to the 410M zero-shot results (wins on 7/10, loses on 3/10). Does this suggest that the advantage of MATES degrades as the model size increases? I will adjust my score if this question is clarified.

---

Rebuttal 2: Comment: Thank you for your detailed review and follow-up questions. We believe the reduced margin on 1B is because our selection ratio and candidate data pool are not optimal for the 1B setting. We don't have enough compute resources to intensively tune these hyperparameters since our work is at an exploratory scale, as mentioned in our limitations section. For the selection ratio, we **fix it to 20%** in our main table. However, if we vary it from **20% to 10%** for 1B at the 50k decay stage, we can observe some gains (also shown in Figure 3a in our supplementary pdf):

| | SciQ | ARC-E | ARC-C | LogiQA | OBQA | BoolQ | HellaSwag | PIQA | WinoGrande | Average |
| :--- | :------- | :------- | :------- | :------- | :------- | :------- | :-------- | :------- | :--------- | :------- |
| 10% | **67.8** | 44.4 | 25.5 | **28.9** | **32.6** | **60.9** | **47.4** | **70.5** | **52.4** | **47.8** |
| 20% | 67.3 | **44.9** | **25.9** | 28.7 | 32.2 | **60.9** | 45.3 | 69.5 | **52.4** | 47.5 |

With the 10% ratio, MATES (Pythia-1B) achieves a 6/0/4 #Win/#Tie/#Loss over the best baselines. This indicates that we still have headroom to improve MATES with a more suitable selection ratio in the full training run (not only at the decay stage, due to the rebuttal time limit). For the selection pool, we **fix it to 125B tokens for all models**, but we believe a larger pool will generally benefit 1B models.
Recent DataComp-LM [1] also shows that the improvement trends observed at a smaller scale (410M) align with larger scales (1B and 7B) **when the larger models have a larger selection pool (e.g., up to 15T tokens)**. We plan to verify the effectiveness of MATES on this standard DataComp-LM benchmark and will update the results with a more suitable ratio and pool in the next version. Additionally, for your reference, we report the #Win/#Tie/#Loss of all our baseline methods as given in their original papers:

| Method | Baselines | #Win/#Tie/#Loss over the best baseline as reported in their papers |
| -------- | ---------------------------------- | ------------------------------------------------------------ |
| DSIR | Random, Manual, Heuristics | 4/0/5 |
| SemDeDup | Random, NearDup | 2/0/1 |
| DsDm | Random, Classifier, DSIR, SemDeDup | 5/1/9 |
| QuRating | Random, DSIR | 5/0/6; if we only compare MATES with Random and DSIR, the #Win/#Tie/#Loss is 8/1/1 |

Note that all these baseline papers have been accepted at top-tier conferences. This underscores the point that it is reasonable for an effective method not to outperform the best baseline on every task but rather to achieve a solid average performance gain across tasks. We recognize that each method has its unique strengths. For instance, QuRating is specifically optimized for educational data and thus is likely to significantly elevate model performance on knowledge QA tasks. On the other hand, MATES stands out by achieving the highest overall performance while utilizing few additional FLOPs. Thank you once again for raising this question. Please do not hesitate to reach out if any part of our response requires further clarification.

[1]: Li, Jeffrey, et al. "DataComp-LM: In search of the next generation of training sets for language models." arXiv preprint arXiv:2406.11794 (2024).

---

Rebuttal Comment 2.1: Comment: I appreciated the authors' response. I will increase the score.
Summary: It is important to carefully select data during the pretraining stage, as pretraining data are often obtained from web crawling and can be extensive and noisy. Existing methods for data selection include heuristic-based approaches, clustering-based techniques, and the use of influence models. However, these methods produce static selected datasets and do not adapt to the evolving training process. This paper introduces a novel approach by developing an influence model that co-evolves with the main pretraining model. The selected data is no longer static; it is now dynamic and sensitive to the model's state. The study demonstrates that continuously adapting data preferences enables efficient training, and that the influence model is both small and accurate. The empirical evidence presented for the framework, MATES, shows better performance compared to other methods, with advantages in both performance and efficiency.

Strengths:
* Originality: This paper introduces a new data selection framework for the pretraining stage by learning a smaller data influence model that co-evolves with the main model's data preferences.
* Quality: Empirically, their results show promise for enabling efficient and improved pretraining.
* Clarity: Their writing and their results are clear to me.
* Significance: Data curation is crucial to denoise massive pretraining datasets. This paper presents a new approach to selecting data points by learning model preferences. Their empirical results support their claim, and I believe their notion is significant to the field.

Weaknesses:
1. In Figure 2, it appears that the data influence model needs to be reinitialized at each iteration. It's not clear why we need to reset it instead of continuing to learn with the main model.
2. It's unclear how the influence model includes and excludes data points and converts them into labels for learning the influence score. Could the authors clarify this?
Additionally, what is the performance of the influence model across training epochs and how do we evaluate it? I believe there is an imbalance in the labels. 3. Although the learning curve indicates efficient and improved performance, it's uncertain how much computational burden is involved in maintaining the trained influence model. It would be helpful to include these details. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. It's an interesting paper, and I am curious about the changes in data preference. Can the authors provide some examples of what the model likes to learn at the beginning but discards later, or vice versa? 2. This is not critical: but looking forward to seeing this approach being employed in other data curation benchmarks, such as DataComp. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I didn't see any potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review of our paper! We will address your questions/comments below:

**Weakness 1**: The initialization of the data influence model.

**Response**: We only initialize our data influence model with pretrained BERT at 10k steps and continuously fine-tune it at the following steps. Figure 6a shows that initializing the data influence model with pretrained weights still works well given enough training data (80k), while continuous fine-tuning demands less training data to achieve the same performance, saving the oracle probing cost by 75% (80k -> 20k).

**Weakness 2**: Unclear how the influence model includes and excludes data points and converts them into labels for learning the influence score.

**Response**: The oracle data influence score is defined as the difference in the target loss before and after one-step training on a hold-out example $x_i$ (Equation 4). Please see the general response 2.2 for more details.

**Weakness 3**: What is the performance of the influence model across training epochs, and how do we evaluate it? Is there an imbalance in the labels?

**Response**: We reported the performance of data influence models during pretraining in Figure 4. Please see the general response 2.3 for more details. The oracle data influence distribution is approximately normal, so there is no imbalance in the labels (Figure 3b in our supplementary pdf). As training steps increase, our data influence model better captures the data influence of the main model, with increasing validation performance.

**Weakness 4**: The computational burden of maintaining the trained influence model.

**Response**: As stated in lines 198-199, we reported the total pretraining FLOPs, including the cost of maintaining the data influence model, in Table 1.
The actual additional data selection cost incurred by MATES is (8.11 - 6.35) * 10^19 = **1.76 * 10^19** FLOPs for the 410M model and (19.97 - 17.67) * 10^19 = **2.3 * 10^19** FLOPs for the 1B model, which is nearly 9x lower than the selection cost of the state-of-the-art data selection method QuRating (**20 * 10^19** FLOPs). Figure 3 also reports the FLOPs-Acc curves, and MATES greatly elevates the scaling curves (even with the additional data selection cost) compared to random selection.

**Question 1**: Some examples of what the model likes to learn at the beginning but discards later, or vice versa?

**Response**: We present a table of high-influence cases on a hold-out 500k subset whose rank decreases considerably across training.

| Selected Data Rank (lower means higher influence) | Source | Text |
| -------------------------------------------------- | --------------------- | ------------------------------------------------------------ |
| 0 in 10k checkpoint; 6376 in 20k checkpoint | Blog | She went to the place where Jonathan lay and gave to his servant's David's richest garment to be placed next to him as he lay crying out in his sickness. She went in and out of the house. She went in and out of the city gates. She waited for David in the place |
| 1 in 20k checkpoint; 22998 in 30k checkpoint | Wikipedia | Two weeks later, Friedman threw three touchdown passes in a 27–0 victory over Northwestern. One of Michigan's touchdowns was set up when Friedman intercepted a Northwestern pass and returned it 13 yards. |
| 3 in 30k checkpoint; 452 in 40k checkpoint | Slides in CRITHINKEDU | Critical Thinking Across the European Higher Education Curricula), Education and Regional Development in Southern Europe: Should we invest in Critical Thinking across the Higher Education Curricula? First… What is Critical Thinking (CT)? CT is not only a high quality way of thinking (skill), but also a way of being (disposition). |
| 1 in 40k checkpoint; 5954 in 10k checkpoint | forkliftcertification | It's a group of training course resources to help you master telescopic forklifts in record time. Or else you're taking the course and throwing a bunch of forklift telescopic training against a wall and hoping something sticks. And forklift is only getting more popular. This chapter is about handler course and certification. |

We find that the model at the early pretraining stage (10k steps) prefers to learn natural narrative without much specific knowledge. At 20k steps, the model focuses more on factual knowledge (e.g., the page from Wikipedia). At the 30k checkpoint, the model prefers more academic text, such as official teaching slides. At 40k steps, the model turns to more long-tail knowledge like telescopic forklift training. This study shows that the model's preferences for data continuously evolve during pretraining. We will add this analysis in the next version of the paper.

**Question 2**: Employing MATES on other data curation benchmarks, such as DataComp.

**Response**: We agree that DataComp [1] is a good future experiment. On another recently released state-of-the-art pretraining dataset, FineWeb [2] (Figure 2 in our supplementary pdf), MATES still demonstrates better average performance compared with the strong FineWeb-Edu classifier (similar to QuRating but using annotations generated by Llama3-70B-Instruct) and the fasttext-oh-eli5 classifier (the best-performing classifier for mining instruction-like data in the DataComp paper). This result gives us confidence that MATES can contribute to top-performing runs on the DataComp leaderboard under its standardized data selection setup.

[1]: Li, Jeffrey, et al. "DataComp-LM: In search of the next generation of training sets for language models." arXiv preprint arXiv:2406.11794 (2024).
[2]: Penedo, Guilherme, et al. "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale."
arXiv preprint arXiv:2406.17557 (2024).

---

Rebuttal Comment 1.1: Comment: Thanks for providing additional experiments and resolving my concerns about initialization, evaluation, and computation. The results in your attachment look promising. Thanks for providing examples that evolve with model preference. They are interesting, and I would love to see more follow-up exploration. Overall, I think this paper is interesting and shows good empirical results, but some places lack detailed descriptions. I will increase my score to 5. Thanks for your response!

---

Reply to Comment 1.1.1: Comment: Thank you very much for your response and recognition of our work! We will clarify the experimental setup and add detailed descriptions of our method and results in the next version.
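The oracle data influence discussed in this thread (the drop in reference loss after a single training step on one candidate example, cf. Equation 4 in the paper) can be illustrated on a one-parameter toy model; the quadratic losses below are illustrative stand-ins, not the paper's setup.

```python
def oracle_influence(theta, x, lr=0.1, target=1.0):
    """Toy oracle influence: reduction in reference loss from one SGD step.

    Reference loss: (theta - target)^2; candidate example loss: (theta - x)^2.
    """
    ref_before = (theta - target) ** 2
    grad = 2 * (theta - x)           # gradient of the candidate's loss at theta
    theta_after = theta - lr * grad  # one probing step on this example
    ref_after = (theta_after - target) ** 2
    return ref_before - ref_after    # positive -> the example helped

# An example aligned with the target helps; an opposed one hurts.
helpful = oracle_influence(theta=0.0, x=1.0)   # pulls theta toward the target -> 0.36
harmful = oracle_influence(theta=0.0, x=-1.0)  # pushes theta away -> -0.44
```

The sign of the score is what the data influence model learns to predict: candidates whose probing step lowers the reference loss are preferred at selection time.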
Summary: The paper proposes a new method, "MATES," which aims to select pretraining data using a reference high-quality data source. MATES uses an estimated influence function to iteratively select the most influential data points. Experiments on the Pythia model + dataset show good promise over existing data curation methodologies.

Strengths:
- Better trained models on MATES' curated data vs. other data curation approaches.
- The distillation of influence scores into a smaller proxy model (BERT-base in the paper) allows MATES to perform iterative, model-state-dependent data selection vs. other baselines, which perform one-step, model-state-independent data selection.
- Thorough experiments: various experiments and ablations are included, e.g., various relevant baseline comparisons, ablating the effect of various hyper-parameters in MATES, etc.

Weaknesses:
- I found the paper writing to be quite loose and missing context in various places. I had to read a lot of the text multiple times as well as fill in the missing context myself.
- While I appreciate the extensive experiments covered in the paper, I feel the paper misses a few important results and/or ablations. See the questions section for more on this.
- The choice of evaluation benchmarks doesn't seem obvious and feels quite debatable to me. See the questions section for more on this.

Technical Quality: 3 Clarity: 2

Questions for Authors: While I completely understand the cost of experiments included in the paper, I believe a few critical questions/extra experiments might greatly improve the paper. Please feel free to ignore any of the following questions in the interest of compute.
- I couldn't find details on which dataset was used. While the abstract mentions C4, the main text doesn't specify this, making me question whether the Pythia dataset was used instead?
- The above question also makes way for a deeper question: since the selection ratio is fixed to 20% (of the entire dataset, I assume), this translates to how many *tokens sampled*? Are the training runs multi-epoch or is the sampled data greater than the training budget? - If the runs are multi-epoch, what are the results in the full-data (no sampling) case? - How does MATES compare to other approaches when the sampling rate is not equal to 20%? Is MATES also useful in the super low/high-sampling rate regime? - What if a different target dataset is used vs. only using lambada? Can we use a mixture of target datasets? How does the design of the target dataset/mixture affect different evals? - Finally, the evaluation benchmarks used in the paper seem to be cherry-picked based on the “high-level” proximity to the kind of data present in lambada. The paper would greatly benefit from using more diverse evaluation scenarios like coding, math, knowledge, etc. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors address the major technical limitations of their work in Section 6. No obvious negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review of our paper! We will address your questions/comments below: **Weakness 1:** Loose writing and missing context. **Response:** Thank you for pointing it out! We will carefully revise the paper from the following perspectives: 1. Clarify the experimental setup, as mentioned in general response 2 2. Add more theoretical derivation from the classic influence function formulae [1] to the representation of our oracle data influence 3. Avoid grammar errors and missing context and add more detailed explanations for both our method and results **Question 1:** Pretraining dataset used. **Response**: We use the C4 dataset (the same as DsDm) only to pretrain all our main models from scratch. We will clarify it in the next version of the paper. **Question 2:** Can the selection ratio translate to the number of tokens? Are the training runs multi-epoch, or is the sampled data greater than the training budget? **Response:** The ratio can be translated into the number of tokens. In every 10k steps, we update the data influence model and select the data from separate pools randomly sampled from the entire C4. Each data selection pool is 5x larger than the training data of 10k steps, so we only select 20% from it. The whole pretraining data pool (C4) is huge, so we run one epoch training as most previous work (DsDm, QuRating) did. **Question 3:** Full-data results? **Response:** For the full training, we follow the original Pythia paper and pretrain the models with 150k steps. As shown in Figure 1 in our supplementary pdf, MATES can improve the pretraining efficiency by more than 3x, given that 50k MATES performance is comparable to or higher than the 150k full data training. Please see the general response 3.1 for more details. **Question 4:** How does MATES compare to other approaches when the sampling rate is not equal to 20%? Is MATES also useful in the super low/high-sampling rate regime? **Response:** Interesting question! 
We only had time to provide ablation results for different selection ratios in Figure 3a of our supplementary pdf, in which MATES shows consistent gains over random selection at both low and high selection ratios, ranging from 1/200 to 1/2. We also find the optimal sampling rate of a larger model (1B) is smaller than that of a smaller model (410M). However, a too low (1/200) or too high (1/2) selection ratio will decrease the performance, as a low ratio may harm diversity, and a high ratio does not leverage the strength of the data influence enough. For your reference, these are the selection ratios adopted in our baseline papers. Our optimal selection ratio is close to that of QuRating/DsDm, which pretrain models of a similar size to ours.

| Methods | Max Model Size | Selection Ratio |
| ----------- | ---------------- | --------------- |
| DSIR | 110M | 3.2% |
| SemDeDup | 6.7B | 63% |
| QuRating | 1.3B | 11.5% |
| DsDm | 1.3B | 20% |
| DataComp-LM | 6.9B | 10% |

**Question 5:** A different or a mixture of target datasets? How does the design of the target dataset/mixture affect different evals? **Response:** Thanks for your suggestion. We use FLAN (a mixture of multiple NLP tasks) as the target dataset to run MATES on the recently released FineWeb dataset [2]. As shown in Figure 2a in our supplementary pdf, MATES still demonstrates superior average downstream performance on FineWeb compared to state-of-the-art data selection methods, such as the FineWeb-Edu classifier and the fasttext-oh-eli5 classifier. Compared with taking LAMBADA as the target task, using FLAN improves the model’s performance on both knowledge QA and commonsense tasks (Figure 2c, Figure 2d). We hypothesize that multi-task data (e.g., FLAN) has better coverage of language task types and is thus more suitable for serving as the target task. Please see general response 3.3 for more details.
**Question 6:** Choice of the evaluation tasks **Response:** We followed the original Pythia paper and our main baseline QuRating for the evaluation benchmark selection and didn’t cherry-pick any tasks that resemble LAMBADA. LAMBADA is essentially a word prediction task and greatly differs from our QA-like evaluation tasks. It is found to be a great reference task due to its high correlation (0.877) with the average downstream accuracy (Figure 5b) and the high data influence modeling (>0.7) performance (Figure 6a). We agree that coding, math, and knowledge are essential to evaluate a language model’s capabilities. For the knowledge part, most of our adopted QA tasks (SciQ, ARC-E, ARC-C, OBQA) actually require factual and commonsense knowledge to solve. In our supplementary pdf, we also evaluate MMLU [3] in Figure 2b. On MMLU, MATES can outperform random selection but not the FineWeb-Edu classifier. As mentioned in general response 3.3, the FineWeb-Edu classifier is optimized towards educational values derived from Llama3-70B-Instruct, which can benefit knowledge-intensive tasks like MMLU at the price of losing generalization to other types of evaluation tasks. For the coding and math tasks, we didn’t include them since they are not commonly used to evaluate pretrained checkpoints w/o additional instruction tuning. Supporting evidence is that even the Llama 3 series [4] **didn’t choose to evaluate pretrained checkpoints** on HumanEval (coding) and MATH (math) due to the non-indicative performance. [1]: Koh, Pang Wei, and Percy Liang. "Understanding black-box predictions via influence functions." Proceedings of the 34th ICML-Volume 70. 2017. [2]: Penedo, Guilherme, et al. "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale." arXiv preprint arXiv:2406.17557 (2024). [3]: Hendrycks, Dan, et al. "Measuring massive multitask language understanding." arXiv preprint arXiv:2009.03300 (2020). [4]: Dubey, Abhimanyu, et al.
"The Llama 3 Herd of Models" arXiv preprint arXiv:2407.21783 (2024). --- Rebuttal 2: Title: Looking Forward to Your Reply Comment: Dear Reviewer 2Zbt, We have carefully addressed your feedback in our rebuttals and provided detailed responses to each of your comments. We believe these clarifications will help in assessing our work more comprehensively. We would greatly appreciate it if you could review our rebuttals and provide any further feedback, given that the author-reviewer discussion will be closed on Aug. 13 at 11:59 p.m. AoE in no more than one day. We are willing to answer any further questions. Thank you for your time and consideration. We look forward to your reply. Best, The Authors
Rebuttal 1: Rebuttal: ## 0 Overview First, we thank all the reviewers for their great efforts and insightful feedback. In this post, we summarize positive points from the reviews, clarify the experimental setup, and address the shared questions proposed by the reviewers with additional experiments to support and strengthen our findings. ## 1 Positive points We sincerely thank all the reviewers for recognizing our key contributions. We appreciate the acknowledgment of our novel data selection framework MATES, which learns a smaller data influence model that co-evolves with the main model’s data preference to perform iterative, model-state dependent data selection ( `2Zbt`, `xrwH`). Furthermore, we are grateful for their recognition of our method’s superior performance: MATES underscores the necessity for model-aware data selection in pretraining, outperforming static baselines with fewer GPU FLOPs (`2Zbt`, `xrwH`, `T2sB`, `R3ZA`). We are also pleased that our extensive and varied experiments and ablations were noted for validating the method’s effectiveness (`2Zbt`, `R3ZA`). ## 2 Clarification of the setup We summarize the most important clarifications and discuss each in detail in the following points: 1. **The pretraining dataset used in this work (`2Zbt` and `R3ZA`):** We use the **C4** dataset only to pretrain all our main models from **scratch.** 1. **The training data of the data influence model (`xrwH` and `R3ZA`):** The oracle data influence score is defined as **the difference in the target loss** **before and after one-step training** (i.e., $\mathcal{L}(\mathcal{D_t} \mid \mathcal{A}(\mathcal{M}, x\mid_{i \to -1}))$ and $\mathcal{L}(\mathcal{D_t} \mid \mathcal{A}(\mathcal{M}, x\mid_{i \to +1}))$ in Equation 4) using the Adam optimizer on a hold-out example $x_i$. 
Then, we construct the mapping of the data point to its influence score $(x_i, \mathcal{I}_\mathcal{M}(x_i;\mathcal{D}_t))$ (line 148) as the training data for our BERT-based data influence model. The size of sampled data for computing data influence scores (i.e., $m$ in Algorithm 1) is **80k** at 10k steps and **20k** at 20k, 30k, 40k, 50k steps. 1. **The evaluation of the data influence model (`xrwH` and `R3ZA`):** We evaluate our data influence model’s performance by its prediction’s Spearman correlation with the oracle data influence on the **10% hold-out validation set** sampled from the collected data influence training set {$(x_i, \mathcal{I}_\mathcal{M}(x_i;\mathcal{D}_t)) \mid x_i \in S_m$} (Algorithm 1). Spearman correlation is an intermediate metric and a higher correlation denotes the better learning of our data influence models that approximate the oracle scores (Figure 4a). Our ultimate evaluation of the data influence model is still the downstream performance of the main model pretrained with the selected data (Figure 4b). We will clarify all of those in the next version of the paper. ## 3 Additional experiments We summarize observations of the additional experiments we have provided in the supplementary pdf below: 1. **Pythia-410M/1B full data training run (`2Zbt`, `T2sB`, and `R3ZA`), in Figure 1:** We provide the full data training experiments (following the original Pythia pretraining steps of 150k) of Pythia-410M/1B in Figure 1. For the full data training curve, we notice that the average downstream performance increases very slowly, while MATES significantly elevates the model performance in the early pretraining steps. MATES can improve the pretraining efficiency by more than 3x, given that 50k MATES performance is comparable to or higher than the 150k full data training. 2. **All baselines in the Pythia-1B (`T2sB`), in Table 1:** We are able to secure more compute resources to run all the 1B baseline results and present those in Table 1. 
Among all the baselines, MATES still performs the best in both zero-shot and two-shot performances. We would like to highlight that zero-shot evaluation is the more important and robust one to reflect a model's pretraining abilities, while two-shot evaluation depends on the in-context examples used, which can be less stable. 3. **MATES on FineWeb [1] with LAMBADA or FLAN as target tasks (`2Zbt`), in Figure 2:** To verify MATES’s effectiveness on a more advanced dataset, we use LAMBADA or FLAN [2] (a mixture of multiple NLP tasks) as the target dataset to run MATES on FineWeb. FineWeb is the state-of-the-art open-source pretraining dataset released by HuggingFace recently. It is processed with complicated and delicate data curation steps. We also include two new baselines in the FineWeb comparison: one is the FineWeb-Edu classifier, which is similar to QuRating and the LLM quality filter in Llama 3.1, using educational scores generated by Llama3-70B-Instruct; the other is the fasttext-oh-eli5 classifier, which mines instruction-like data from the DataComp corpus and achieves the best selection results in the DataComp paper [3]. As shown in Figure 2a, MATES demonstrates superior average downstream performance on FineWeb compared to these state-of-the-art data selection methods. Using multi-task FLAN as the target can make MATES generalize better to downstream tasks than using LAMBADA only. In contrast, the FineWeb-Edu classifier overfits the knowledge QA tasks (Figure 2c) but underperforms in commonsense tasks (Figure 2d), as the scores it assigns to data are influenced by human-written criteria that may disproportionately favor certain downstream capabilities. We will add and analyze all of those in the next version of the paper. [1]: Penedo, Guilherme, et al. "The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale." arXiv preprint arXiv:2406.17557 (2024). [2]: Chung, Hyung Won, et al. "Scaling instruction-finetuned language models."
Journal of Machine Learning Research 25.70 (2024): 1-53. [3]: Li, Jeffrey, et al. "DataComp-LM: In search of the next generation of training sets for language models." arXiv preprint arXiv:2406.11794 (2024). Pdf: /pdf/076319cc1fe251de25d1eae26a684365dace318b.pdf
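The Spearman-correlation evaluation of the data influence model described in clarification 3 of the global response can be sketched in plain Python (Spearman rank correlation is the Pearson correlation of the ranks). The oracle and predicted scores below are made-up numbers for illustration only:

```python
def ranks(values):
    """1-based average ranks (ties get the mean of their positions)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    ra, rb = ranks(a), ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

# Hypothetical hold-out set: oracle influences vs. data-influence-model predictions.
oracle = [0.30, -0.10, 0.05, 0.80, -0.40]
predicted = [0.25, -0.05, 0.10, 0.90, -0.30]  # same ordering as the oracle
rho = spearman(oracle, predicted)
```

A higher rho on the 10% hold-out validation set indicates that the data influence model better approximates the oracle scores; the ultimate metric remains the downstream performance of the pretrained model.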
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Scalable DP-SGD: Shuffling vs. Poisson Subsampling
Accept (poster)
Summary: DP-SGD is one of the most common algorithms currently deployed for performing machine learning tasks while maintaining differential privacy. As the "S" in its name reminds us, the gradient is not computed over the full dataset at each turn, but instead over a subset sampled from it. The commonly used privacy analysis typically relies on Poisson sub-sampling, whereby each element is independently added to each batch with some constant probability. In contrast, due to technical challenges stemming from the fact that the Poisson sub-sampled batch size is not constant, this is typically implemented by first shuffling the full dataset, then batching it into subsequent subsets. Unfortunately, there is a gap between existing upper bounds on the privacy loss of the shuffle version and those of the sub-sampled version. Recently, Chua et al. [1] provided a lower bound on the privacy of the shuffled version, which shows a clear gap between the two settings in some parameter regimes (mainly low local privacy). This paper continues and extends the work of [1] in two ways: 1. It extends the analysis to the multi-epoch setting 2. It proposes a variant of Poisson sub-sampling with effective constant batch size, by paying a small price in the privacy failure probability $\delta$ The authors also perform several empirical experiments to compare the privacy and accuracy of various versions of private training. [1] Lynn Chua, Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Amer Sinha, and Chiyuan Zhang. How Private is DP-SGD? In International Conference on Machine Learning, ICML (to appear), 2024. Strengths: This paper is clearly written, and adds to our understanding of the toolkit one can use when attempting to learn privately. The proposed truncated sub-sampling might prove a useful alternative to the existing commonly used shuffle method. Weaknesses: While this work is clearly written and provides useful empirical and theoretical tools, its novelty is somewhat limited.
As mentioned, this is a follow-up work to the one by Chua et al. The extension of their analysis to multiple epochs is relatively straightforward, and so is the analysis of the truncated Poisson sub-sampling. Technical Quality: 4 Clarity: 4 Questions for Authors: Can the authors add a comparison of the various privacy analyses to the upper bounds provided by existing literature on the general shuffle model, such as the one provided by Feldman et al. [2]? Additionally, to the best of my understanding, amplification by sub-sampling constant-size batches without replacement is well understood, and enjoys privacy guarantees comparable to those of Poisson sub-sampling, up to a factor of 2, depending on the exact neighboring notion. What is the advantage of the proposed method relative to the WOR sampling option? [2] Feldman, V., McMillan, A., and Talwar, K. Hiding among the clones: A simple and nearly optimal analysis of privacy amplification by shuffling. In FOCS, pp. 954–964, 2021 Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments. > add a comparison of the various privacy analyses to the upper bounds provided by existing literature on the general shuffle model While the amplification bounds for shuffling such as by Feldman et al. provide upper bounds, we did not consider it because * our focus in this work was primarily to compare _lower bounds_ for Shuffle DP-SGD with _upper bounds_ for Poisson DP-SGD, and * in the regime of small noise multipliers, where we see that our lower bound on the noise multiplier of Shuffle DP-SGD is quite close to that of Deterministic (no amplification), these upper bounds on amplification by shuffling have to necessarily be near-vacuous (cannot be much better than deterministic). Nevertheless, this is a valuable point, and we will include at least the noise multipliers obtained via these amplification upper bounds for comparison in a future revision. > sampling without replacement (Similar point raised by Reviewer 1Uvo) Indeed, sampling without replacement (i.e., sampling fixed-size batches uniformly and independently at random) can also be implemented using the massively parallel computation approach we propose. We did not consider it in our evaluation, however, since its privacy guarantees are worse than those of Poisson subsampling. Similar to your point, as noted in this recent work of [Lebeda et al.](https://arxiv.org/abs/2405.20769), the noise scale required for this method is twice that required for Poisson subsampling. We will revise the paper to include a discussion of this alternative approach. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. As previously mentioned, the comparison of lower bounds for the shuffle model to upper bounds for the Poisson sub-sampling model was already performed by Chua et al.
One of the main contributions of this current work, as stated by the authors, is the proposal of the truncated Poisson sub-sampling model, to remedy the existing gap in private SGD-based training. I find it hard to assess the importance of this contribution, where the upper bounds for two existing alternatives (shuffle and constant-size sub-sampling) were not considered, and might very well provide accuracy results comparable to the newly proposed technique. Additionally, I would recommend the authors better clarify in the notation which lines correspond to upper bounds and which ones to lower bounds. This seemed to cause confusion for some of the reviewers, and might affect the readers as well. In fact, the comparison discussed in line 284 creates the impression that the two compared lines represent the same quantity while - to the best of my understanding - they do not. --- Reply to Comment 1.1.1: Comment: > better clarify in the notations which lines correspond to upper bounds and which ones to lower bounds Just to clarify, the noise multiplier for $\cal D$ is essentially exact, the noise multipliers for $\cal P$ (as well as ${\cal S}^{\circlearrowright}$ with $\cal P$ accounting) correspond to upper bounds, and the noise multipliers for ${\cal S}^{\circlearrowright}$ and ${\cal S}^\diamond$ are lower bounds. Thanks for the feedback, we will incorporate this in the revision.
Summary: This paper provides theoretical and empirical analyses of three different DP-SGD minibatch sampling schemes, and also implements an efficient beam pipeline for using Poisson sampling in practice via a truncation-based approach. Theoretically, they derive new lower bounds for the privacy accounting / noise calibration of shuffling-based DP-SGD, and truncated Poisson-sampling-based DP-SGD. Empirically, they show (1) for the same noise multiplier, shuffling does better (in terms of AUC) than Poisson sampling and (2) for the same privacy budget, Poisson sampling does better, mainly due to the more favorable privacy accounting. In one experiment (Fig 3 upper left) they show that shuffling can sometimes be better than Poisson sampling for small epsilon. Strengths: * Addresses an important gap in many works on DP-SGD in a principled manner. * Clearly demonstrates the benefits of Poisson sampling over shuffling with correct / tight privacy accounting. * Shows how to implement Poisson sampling within the typical constraints of an ML pipeline via truncation + padding + offline beam job to order the examples. This is crucial to get the best privacy / utility trade-offs and formal privacy guarantees in production DP applications. The shuffling ≈ sampling approximation may still make sense in research applications. * Experiments compare the methods you'd expect to see (normalized by sigma and normalized by privacy budget). Weaknesses: There are a few missing pieces that I think would strengthen the paper and hopefully not require too much extra work: 1) Thm 3.3 uses the bound from Prop 3.2, and may not be the tightest possible result. Would be good to quantify the gap between Poisson sampling and Truncated Poisson sampling in terms of the noise multiplier you obtain for each, fixing p for Poisson sampling and B for the Truncated Poisson sampling so that the expected batch sizes of Poisson sampling match the physical batch sizes of truncated Poisson.
2) There is not enough discussion or analysis on how to configure the Truncated Poisson parameters. For a fixed physical batch size, privacy budget, and other relevant parameters (number of iterations), can you give a simple procedure to find the best sampling probability to use in terms of the expected signal to noise ratio? 3) Do the same findings hold up beyond the 1- and 5-epoch regimes? Would be nice to include a comparison where Epochs is varied on the x-axis and sigma is varied on the y-axis (no need to do new experiments on real data which might be very expensive). Minor: 4) In plots you should evenly space out the independent variables. If plotting in log space, evenly space them in log space (e.g., Batch Size = 1024, 2048, 4096, ..., 262144 and epsilon = 1, 2, 4, 8, ..., 16). Also the legend is a bit small, and shared across plots. Consider making 1-row 5-column legend spanning or otherwise increasing the font size. Technical Quality: 4 Clarity: 3 Questions for Authors: 1) In Fig 2, it seems like P corresponds to truncated Poisson sampling, but green line corresponds to non-truncated Poisson accounting. Why do these lines line up in the middle plots then? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments. > quantify the gap between Poisson sampling and Truncated Poisson sampling While it is likely that our analysis for truncated Poisson sampling is not optimal, it is reasonable enough in practice. The loss to the privacy accounting due to truncation is minimal, as we allocate only $0.1 \delta$ for the truncation part, as seen by how the noise multipliers for Poisson subsampling with and without truncation are quite similar (seen in Fig. 2, 3). Note that the trade-off here is only between privacy and computation: a larger maximum batch size means accounting is closer to (untruncated) Poisson subsampling, at the cost of increased computation during model training. Thus, the optimal accounting for truncated Poisson subsampling would at best allow us to use a slightly smaller maximum batch size, which would only slightly reduce the computational cost of training. Since this was not central to our work, we did not fully optimize the analysis. Thanks for raising this, and we will consider adding this to the discussion. > find the best sampling probability to use in terms of the expected signal to noise ratio While we do not optimize this theoretically, this was indeed our motivation for considering different batch sizes in Figure 2. > include a comparison where Epochs is varied on the x-axis and sigma is varied on the y-axis We provide $\sigma$ values for different accounting methods in the table below, as well as in a plot in Figure 1 of the attached pdf. Note that the values for the $\cal P$ sampler are upper bounds, whereas the values for $\cal S$ samplers are lower bounds. Again observe that the values for $\cal P$ (truncated Poisson subsampling) are very close to ${\cal S}^{\circlearrowright}$ ($\cal P$ accounting), the latter corresponding to untruncated Poisson subsampling.
| Epochs | $\cal P$ | ${\cal S}^\circlearrowright$ ($\cal P$ accounting) | ${\cal S}^\circlearrowright$ | ${\cal S}^{\diamond}$ | $\cal D$ |
|---|---|---|---|---|---|
| 1 | 0.608 | 0.606 | 1.053 | 1.053 | 1.106 |
| 3 | 0.643 | 0.642 | 1.181 | 1.825 | 1.916 |
| 5 | 0.665 | 0.663 | 1.227 | 2.356 | 2.474 |
| 10 | 0.702 | 0.701 | 1.292 | 3.331 | 3.499 |
| 15 | 0.730 | 0.729 | 1.334 | 4.080 | 4.285 |
| 20 | 0.755 | 0.753 | 1.366 | 4.711 | 4.948 |

Observe that the values of Poisson subsampling with truncation are slightly larger than those without truncation. > evenly space out the independent variables Thanks for the suggestion. We will modify the plots accordingly. > Why do these lines line up in the middle plots As argued above, the additional privacy loss due to truncation of the batch in Poisson subsampling is minimal even with our sub-optimal analysis, and so while the values with and without truncation are not exactly the same, the difference is negligible compared to the scale of the rest of the plot. --- Rebuttal Comment 1.1: Comment: Apologies, I am just looking at this carefully now. For the table you shared, those noise multipliers appear to not account for the differing expected batch sizes, is that correct? For P, I would like to see noise multiplier / (expected batch size). For S (with P accounting), I would like to see noise multipliers / (physical batch size). Please update and send back ASAP, I would like to follow up after seeing those numbers. --- Rebuttal 2: Title: Response to Official Comment by Reviewer 9tRV Comment: We would like to clarify that for the table we shared, the expected batch sizes for Poisson subsampling are the same as the physical batch size for Shuffle and Deterministic; by "${\cal S}^{\circlearrowright}$ (with $\cal P$ accounting)", we mean the same method as ${\cal S}^{\circlearrowright}$, but using the noise multiplier assuming Poisson subsampling (without any truncation).
While truncation in Poisson subsampling does reduce the expected batch size by a small amount, this is quite negligible and hence the normalization would be by approximately the same quantity. In particular, we use the values of $\varepsilon = 5$, $\delta = 2.7 \cdot 10^{-8}$ and (expected) batch size $b = 200000$. Please let us know if there are still any further questions. --- Rebuttal Comment 2.1: Comment: I see, I would have thought the plot would be normalized for physical batch size. For Truncated Poisson sampling what is the physical batch size you use then? --- Rebuttal 3: Comment: For truncated Poisson subsampling, we use a maximum batch size as specified below, which includes the dummy examples added to ensure that all batches have the same size. The normalization is still by expected batch size (before truncation), which is $b = 200,000$.

| Epochs | 1 | 3 | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|---|
| **Max Batch size** | 203288 | 203353 | 203383 | 203423 | 203446 | 203463 |

--- Rebuttal Comment 3.1: Comment: I see, so only a ~1.7% "cost" -- perhaps there is not much to be gained by optimizing the parameters. Thanks for clarification, would be good to include a version of this table in the paper or appendix.
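The truncated Poisson subsampling discussed in this thread (independent Bernoulli inclusion with probability p, truncation to a maximum physical batch size, and dummy padding so every batch has the same size) can be illustrated with a stand-alone sketch. This is not the paper's massively parallel implementation; the function name and constants are invented for illustration:

```python
import random

def truncated_poisson_batches(n, p, max_batch, steps, seed=0):
    """Sketch of truncated Poisson subsampling: each of n examples joins a
    batch independently with probability p; batches larger than max_batch
    are truncated (dropping a random subset), smaller ones are padded with
    dummy slots (-1), so every physical batch has exactly max_batch slots."""
    rng = random.Random(seed)
    batches = []
    for _ in range(steps):
        batch = [i for i in range(n) if rng.random() < p]
        rng.shuffle(batch)                         # truncation drops a random subset
        batch = batch[:max_batch]
        batch += [-1] * (max_batch - len(batch))   # dummy padding to fixed size
        batches.append(batch)
    return batches

n, p = 10_000, 0.02   # expected batch size b = n * p = 200
batches = truncated_poisson_batches(n, p, max_batch=230, steps=50)
real_sizes = [sum(x >= 0 for x in batch) for batch in batches]
```

The privacy cost of the truncation is charged to a small slice of the failure probability (the rebuttal mentions allocating $0.1\delta$), while the fixed physical batch size is what makes the method fit standard ML pipelines.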
Summary: The paper focuses on practical implementations of DP training of ML models at scale in the multi-epoch setting. The contribution is two-fold: the paper gives a rigorous analysis for a practical version of Poisson subsampled DP-SGD where the batch size is upper bounded, and proposes lower $(\varepsilon,\delta)$-bounds for the shuffled Gaussian mechanism. Both of these contributions require a careful theoretical analysis which the paper succeeds in carrying out such that the bounds are sharp: there is no big slack in the approximative Poisson subsampling upper bound, and the lower bounds are able to illustrate that the shuffling mechanism generally leads to a worse privacy-utility trade-off than the Poisson subsampled Gaussian mechanism. So the message becomes clear: although shuffling + disjoint batch training is practical, the truncated Poisson version the paper proposes will likely lead to a better privacy-utility trade-off. The paper builds upon the paper (Chua et al., ICML 2024) and borrows several results from it. Strengths: - Clear message and a well-written paper on a timely topic. DP training of ML models is becoming more popular and these practical aspects need to be addressed. - Elegant theoretical analysis for the approximative Poisson mechanism and for the lower bounds of the shuffled Gaussian mechanism. Weaknesses: - The paper heavily builds upon the previous work by Chua et al. (ICML 2024), and the novelty is relatively thin, although the paper succeeds in delivering a clear message. - The privacy amplification by iteration - type of analysis is not mentioned. Although the current analyses in that line of work are only applicable to convex problems, they give $(\varepsilon,\delta)$-DP bounds for e.g. noisy cyclic GD (Bok et al., ICML 2024) that lead to models whose privacy-utility trade-off is competitive with the DP-SGD trained models (see e.g. experiments by Bok et al.).
Technical Quality: 4 Clarity: 3 Questions for Authors: - You mention that the analysis for the shuffled Gaussian mechanism is an open problem. Why do you think this is an important open problem if the lower bounds for the shuffled Gaussian mechanism already indicate that DP-SGD give much better privacy-utility trade-off? In which scenarios would the upper bounds be useful? - Why do you only consider the Poisson subsampling? Is there something particular in the subsampling without replacement that makes it less suitable to the large scale setting? In that case you would not need to have those approximations that you carry out to limit the batch size. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments. > novelty over prior work [Similar to Reviewer vHjL] While it may seem that our privacy analysis used some standard techniques (discretization, post-processing and PLD accounting), it was a priori unclear if such a simple method would be effective at providing strong lower bounds, let alone what such a method (e.g. the choice of discretization sets) would be. We would like to note that the prior work of [Chua et al](https://arxiv.org/abs/2403.17673) explicitly notes in Section 5 that _[their] approach is limited to a “single epoch” mechanism ... Extending [their] approach to multiple epochs will be interesting._ Moreover, while they are primarily focused on theoretical analysis, we also study practical implementations and evaluations on real-world problems to verify the empirical feasibility of the proposed methods. > privacy amplification by iteration is not mentioned. Understanding when privacy amplification by iteration applies remains an interesting direction, but as of now there is no evidence that it can provide any improvements in the general (non-convex) setting. In fact, a recent [work](https://arxiv.org/abs/2407.06496) suggests that such an improvement might not be possible for general non-convex losses. We will include a discussion on the same in a revision. > Importance of upper bounds for the shuffled DP-SGD? This would be relevant in a regime where Shuffle-DP-SGD performs better than Poisson subsampling. In this case, we would need an “upper-bound” accounting method to report a correct DP guarantee. But we agree that it is less relevant in regimes where Shuffle DP-SGD has worse model utility. > subsampling without replacement (Similar point raised by Reviewer k7t2) Sampling without replacement (i.e., sampling fixed-size batches uniformly and independently at random) can also be implemented using a variant of the massively parallel computation approach we propose.
We did not consider it in our evaluation, however, since its privacy guarantees are worse than those of Poisson subsampling. In particular, as noted in this recent work of [Lebeda et al.](https://arxiv.org/abs/2405.20769), the noise scale required for this method is twice that required for Poisson subsampling. We will revise the paper to include a discussion of this alternative approach. --- Rebuttal Comment 1.1: Comment: Thank you for the replies! I will keep my score. Please do add the discussion about subsampling without replacement. Please take into account also the conjecture by [Lebeda et al.](https://arxiv.org/pdf/2405.20769) that subsampling without replacement behaves similarly both for add/remove and substitute neighborhood relations.
Summary: This paper investigates the utility of models trained with DP-SGD based on previous findings on the gap in privacy guarantee between shuffled batch sampling (commonly used in practice) and Poisson subsampling (used in theoretical analysis) for private training. A scalable implementation of Poisson subsampling is proposed via truncation and parallelization, provided with lower bounds of the privacy guarantee via multi-epoch ABLQ with shuffling. Strengths: - This paper studies the implementation of subsampling with associated privacy accounting in private training and extends the results of ABLQ for multiple epochs. - A scalable implementation of Poisson subsampling is proposed, provided with lower bounds on its privacy guarantee. Weaknesses: - The techniques for extending ABLQ from multi-batch to multi-epoch are straightforward. - It is difficult to distinguish from the existing results of Chua et al. [2024], given much of the material in Sec. 2 and Sec. 3 overlaps and is sometimes verbatim. - Different types of batch sampling with different privacy accounting are evaluated. Still, all of them underestimate the privacy loss and cannot rigorously reflect the model utility under the claimed $(\epsilon, \delta)$. Technical Quality: 2 Clarity: 3 Questions for Authors: - The authors mentioned that the Opacus library supports Poisson subsampling but might not be used for training on massive datasets. Have the authors tested the implementation from Opacus on the Criteo dataset? If so, how large is a batch it can support (from 1000 to 200,000)? Is this the major obstacle to be included as a reference for Fig.2 and 3? - Is the target batch size $b$ still used to average the summed noisy gradients for the truncated Poisson Batch sampler? Could the authors elaborate on how the size of truncated batches would affect the tracking of privacy loss, especially in Theorem 3.3? - Lines 254-255, could the authors provide more details on selecting $C_i$ in practice? 
And what is the computation complexity regarding this part? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See W2,3 and Q1 Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments. > techniques for extending to multi-epoch are straightforward. [Similar to Reviewer 1Uvo] While it may seem that our privacy analysis used some standard techniques (discretization, post-processing and PLD accounting), it was a priori unclear if such a simple method would be effective at providing strong lower bounds, let alone what such a method (e.g. the choice of discretization sets) would be. We would like to note that the prior work of [Chua et al](https://arxiv.org/abs/2403.17673) explicitly notes in Section 5, that _[their] approach is limited to a “single epoch” mechanism ... Extending [their] approach to multiple epochs will be interesting._ Moreover, while they are primarily focused on theoretical analysis, we also study practical implementations and evaluations on real world problems to verify the empirical feasibility of the proposed methods. > material overlap in Sec. 2 and 3. Section 2 consists mainly of definitions (some standard and some from [Chua et al](https://arxiv.org/abs/2403.17673)), resulting in the overlap; please note that we have cited them in all relevant places. Nevertheless, thank you for bringing this to our attention and we will keep this in mind as we revise the paper to highlight our contributions better. > Different types of batch sampling with different privacy accounting are evaluated ... all of them underestimate the privacy loss and cannot rigorously reflect the model utility under the claimed $(\epsilon, \delta)$. We would like to clarify a possible misunderstanding here, that for Poisson subsampling based DP-SGD, we use the `dp_accounting` library that _overestimates_ the privacy loss, and for Shuffle based DP-SGD, our method _underestimates_ the privacy loss. So the way to interpret our results is that: whenever Poisson subsampling based DP-SGD out-performs Shuffle-based DP-SGD, this would hold even if we use the optimal accounting for each method. 
> Have the authors tested the implementation from Opacus on the Criteo dataset? Using the implementation from Opacus requires efficient random access to all the data points (e.g. loading the entire dataset in memory), which is not always feasible depending on the machine and configuration, given that the Criteo dataset takes 49GB of storage. We believe such a comparison does not provide too much scientific value, because for small datasets that can fit in memory, the Opacus solution and our method would perform similarly because they are mathematically equivalent. But in contrast to our methods, to the best of our knowledge, it is not straightforward how to scale Opacus to larger datasets. >Is the target batch size b still used to average the summed noisy gradients for the truncated Poisson Batch sampler? Could the authors elaborate on how the size of truncated batches would affect the tracking of privacy loss, especially in Theorem 3.3? We average the summed noisy gradients using the expected batch size; in any case, this is simply a scaling factor that can be assimilated in the learning rate, hence it is not important whether we normalize by the expected or maximum batch size. There is a privacy-vs-computation tradeoff in the choice of maximum batch size $B$ in the Poisson sampler: Taking $B$ to be large gets us closer to the privacy guarantee of the (untruncated) Poisson sampler, but increases the computation cost, since the batches are now larger (even if they contain dummy values with zero weight). > Line 254 -255, could the authors provide more details on selecting Ci in practice? And what is the computation complexity regarding this part? Indeed, while any choice of $C_i$’s are valid, there is an accuracy-vs-computation trade-off, in that, making $C_1$ smaller, $C_m$ larger and adding large number of intermediate $C_i$’s improves the accuracy of the lower bound at the cost of increased computational complexity. 
* As we note in the paper, we choose $C_1$ to be small enough, and $C_m$ to be large enough, so that $P_{\cal S}(G_0)$ and $P_{\cal S}(G_{m+1})$ are sufficiently small. In particular, we chose the values to ensure $P_{\cal S}(G_0) + P_{\cal S}(G_{m+1}) \le e^{-40}$. * We chose $C_i$’s to be equally spaced between $C_1$ and $C_m$ with a gap of $\Delta \cdot \sigma^2$, where $\Delta$ is the desired discretization of the PLD. This is a heuristic choice guided by the intuition that we would like to discretize in buckets such that the privacy loss changes by $\Delta$, and since the privacy loss is approximately given as $\max_t x_t / \sigma^2$, the chosen gap means that this approximate privacy loss varies by $\Delta$ between buckets. These details are present in the implementation provided in the Colab link in the paper. But we will highlight these details more prominently in a revision. Please let us know if there are any further questions we can clarify, and if we have satisfactorily addressed the concerns of the reviewer, we kindly ask the reviewer to revisit the rating. --- Rebuttal Comment 1.1: Comment: Thank the authors for the response and clarification. There are still some issues to be addressed: - W1 The authors highlight the difference between single-epoch and multi-epoch analysis of ABLQ. However, from my point of view, as reviewer k7t2 pointed out, the theoretical contribution of this part is limited. - W3 Thanks for the clarification. I wanted to emphasize that even with the new lower bounds for multi-epoch, the gap between Poisson subsampling and shuffle-based sampling is not addressed, especially the amplification effect in the low $\sigma$ region. - Q1 The statement in lines 47-49 is not supported by empirical results or further investigation, which may be inaccurate and misleading to other researchers/practitioners in the field. 
As far as I know, Opacus supports (distributed) Poisson sampling without the need to load the entire dataset (e.g., simply by indexing). I suggest the authors do their due diligence and check the implementation at https://opacus.ai/api/_modules/opacus/utils/uniform_sampler.html - Q2 The concern here is on the effect of batch size value used in averaging the sum of noisy gradients for privacy accounting. The batch size essentially reflects the sampling rate. However, truncated Poisson sampling distorts the probability of each sample being included in a batch. Has this been considered? - W2 \& Q3 Thank the authors for the response. I hope the authors will include these discussions in the manuscript. Given the amount of revisions required and the questions that remain, I am inclined to keep my initial assessment. --- Rebuttal 2: Comment: > Opacus supports (distributed) Poisson sampling without the need to load the entire dataset (e.g., simply by indexing). I suggest the authors do their due diligence and check the implementation at https://opacus.ai/api/_modules/opacus/utils/uniform_sampler.html Thanks to the reviewer for the pointer. We apologize that our original text in lines 47-49 could be misinterpreted as “Opacus solution *requires* loading the dataset into the memory”. We meant to say that “loading into memory” is one way to facilitate this feature for small datasets. We will revise the text to make it more precise. To avoid any further misunderstanding between us and the reviewer on this matter, we summarize our view of the Opacus solution here: Opacus Poisson Subsampling works by using a unique identifier to index each example, and sampling the indices. They also provide a distributed sampler where each worker samples from a random subset of indices. This approach relies on a DataLoader that can provide efficient random access --- given an arbitrary index, read the corresponding data point efficiently. 
If random access is slow, the data pipeline will be slow, even though the (index) sampler is not the bottleneck. For small datasets, this can be easily supported by loading the data into memory. For large datasets that do not fit in the memory, various technical challenges need to be addressed to make it work. For example, many formats for raw data (e.g. `csv`, typically used for tabular or text data) or serialized data (e.g. `tfrecords` files, commonly used by the Tensorflow Datasets Catalog) do _not_ naturally support random indexing; moreover, random access can be much slower than sequential bulk reading in cases such as when the dataset is stored in a distributed file system. Various workarounds do exist depending on the resources and constraints, for example, by storing each example as an individual file in a fast SSD attached to the trainer machine. But those solutions usually need to be tailored to specific hardware and cluster infrastructures. On the contrary, we provide a **generic solution** that works for a large range of scales of dataset sizes and with **minimal assumption / requirement** on the underlying IO infrastructure. We agree that an empirical comparison of different technical solutions could be interesting but it is quite difficult to get right and useful as everyone has different configurations of (distributed) file systems, file format of preference, cluster configuration with communication channels of different properties, etc. Therefore it is quite far from the scope of our paper. Finally, regardless of the underlying technical solution, please note that ours is the _first large-scale_ DP training experiment with Poisson subsampling, comparing its feasibility with other solutions from multiple angles, including accounting, efficiency, and model utility. 
> truncated Poisson sampling distorts the probability of each sample being included in a batch It is true that due to truncation in Poisson subsampling, the probability that an example lands in a batch is reduced slightly. This is handled in the privacy analysis (Theorem 3.3). Please let us know if there are any further questions.
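The truncated Poisson sampler under discussion can be made concrete with a minimal sketch. This is our own illustration using Python's standard `random` module with hypothetical names; it is not the paper's massively parallel implementation, and the truncation probabilities it induces are exactly what Theorem 3.3 is said to account for:

```python
import random

def truncated_poisson_batch(n, expected_batch_size, max_batch_size):
    """Poisson subsampling with truncation: each of the n examples joins the
    batch independently with probability p = expected_batch_size / n, and the
    batch is then truncated to at most max_batch_size examples."""
    p = expected_batch_size / n
    batch = [i for i in range(n) if random.random() < p]
    if len(batch) > max_batch_size:
        # Truncation step: keep a uniformly random subset. This slightly
        # reduces each example's inclusion probability, which the privacy
        # analysis must (and, per the rebuttal, does) take into account.
        batch = random.sample(batch, max_batch_size)
    return batch
```

Per the rebuttal, the summed noisy gradients are then averaged by the *expected* batch size; since that is a constant, it can equivalently be folded into the learning rate.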
Rebuttal 1: Rebuttal: We thank all the reviewers for their thoughtful comments. In the attached pdf, we plot the noise multiplier $\sigma$ values against the number of epochs for different accounting methods, to answer a comment from Reviewer 9tRV. Pdf: /pdf/ce8be1d12b9f4c7daf799376c70d37c363510a9d.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Complete Graphical Criterion for Sequential Covariate Adjustment in Causal Inference
Accept (poster)
Summary: The paper addresses the incompleteness of current sequential covariate adjustment criteria in causal inference. It introduces a sound and complete graphical criterion for sequential covariate adjustment, termed Sequential Adjustment Criterion (SAC), and provides an algorithm for identifying a minimal sequential covariate adjustment set. This work demonstrates the limitations of existing criteria, proposes the SAC, and develops a method to construct and optimize adjustment sets for efficient causal effect estimation. Strengths: The paper demonstrates the limitations of the multi-outcome Sequential Back-Door (mSBD) criterion by presenting examples where mSBD fails to identify causal effects that can be identified using sequential covariate adjustment. The paper presents a new criterion, SAC, which is both sound and complete for sequential covariate adjustment. Develops an algorithm to find the minimal sequential covariate adjustment set, reducing unnecessary computations. Weaknesses: The presentation of the manuscript is narrow and difficult to follow. For example, in Eq. 2, the subscript should be used to distinguish the orders of variables, especially in the left-hand side of the equation. Definition 3 seems incorrect since the presentation 'Z is said to be a sequential adjustment set relative to (X,Y) in G if…' is difficult to understand. How to determine a sequential adjustment set from G? How to understand ``the causal effect is given as an adjustment''? The conclusion in Definition 4 is not readable, please double-check. What type of causal graph satisfies the proposition 3? Can you provide a type to summarize this kind of causal graph? The paper primarily provides theoretical examples and proofs to demonstrate the effectiveness of SAC. 
Further, the causal graph should be given and the lack of extensive validation with real-world data sets might be seen as a limitation. Technical Quality: 3 Clarity: 2 Questions for Authors: See Weaknesses Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable feedback. --- > For example, in Eq. 2, the subscript should be used to distinguish the orders of variables, specially, in the left-hand side of the equation. In Eq. (2) included in Definition 3, $\mathbf{X}$ and $\mathbf{Y}$ are already defined as $(\mathbf{X}_1,\cdots ,\mathbf{X}_m)$ and $(\mathbf{Y}_0,\cdots ,\mathbf{Y}_m)$, respectively. --- > __(1)__ Definition 3 seems incorrect since the presentation 'Z is said to be a sequential adjustment set relative to (X,Y) in G if…' is difficult to understand. __(2)__ How to determine a sequential adjustment set from G? 1. That is a standard and established way to express it. Similar expressions can be found in existing works [1, 2]. 2. Section 4.1 and Section 4.2 address methods for constructing $\mathbf{Z}$, focusing on how to determine the sequential adjustment set. --- > How to understand “the causal effect is given as an adjustment”? It means that the causal effect (left-hand side of Eq. (13)) is expressed as covariate adjustment (right-hand side of Eq. (13)). For further explanation of the adjustment criterion, please refer to lines 73-75 of our paper or the existing works referenced in lines 17-27. --- > The conclusion in Definition 4 is not readable It would be greatly appreciated if you could specify which part you find "not readable," so that we can better address your comment. --- > What type of causal graph satisfies the proposition 3? Can you provide a type to summarize this kind of causal graph? In lines 142-154, we illustrate such an example immediately following Proposition 3, where mSBD is not satisfied, but the causal effect can still be expressed by SCA. 
--- > Further, the causal graph should be given and the lack of extensive validation with real-world data sets might be seen as a limitation.Y Our paper falls into the causal effect identification category in which the key task is to express the causal effect as a function of observational distribution using assumptions encoded in a causal graph. Therefore, we disagree that the presence of causal graphs is a limitation of our work. As you noted, our work is theory-oriented, presenting a complete criterion for sequential covariate adjustment and introducing an algorithm for constructing a (minimal) sequential adjustment set. Our focus has been on clearly elucidating the theories and algorithms through appropriate examples. We believe that simulation studies or experimental validations would not enhance our theoretical work. However, to demonstrate the practical benefits of our theories, we will provide implementation code during the revision process. --- <Reference> [1] Pearl, J. (1995). Causal diagrams for empirical research. Biometrika, 82(4):669–710. [2] Pearl, J. (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press, New York. 2nd edition, 2009. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal. Comment: Thank you for the rebuttal. My concerns have been partially addressed. It would be better to add some examples for illustrating Definition 4. I am not confident about whether we can summary the graph that mSBD criterion fail. --- Reply to Comment 1.1.1: Comment: > It would be better to add some examples for illustrating Definition 4. Please note that Definition 4 (Partitioning Operator) has already been applied to all subsequent examples. --- > I am not confident about whether we can summary the graph that mSBD criterion fail. What do you mean by _summary the graph that mSBD criterion fail_ ? 
In this response, we presume that the reviewer is concerned with the case where the mSBD criterion fails while the proposed SAC succeeds (i.e., a type of causal graph satisfying Proposition 3). First, we recall that SAC implies the mSBD criterion as follows: $$ (\mathbf{Y}^{\geq i+1} \perp X_i \mid \mathbf{Z}_i, \mathbf{H}_{i-1})_{\mathcal{G}^{\mathbf{X},\mathbf{Y}}_{\operatorname{psbd},i}} \implies (\mathbf{Y}^{\geq i+1} \perp X_i \mid \mathbf{Z}_i, \mathbf{H}_{i-1})_{\mathcal{G}_{\underline{X_i}\overline{\mathbf{X}^{\geq i+1}}}}, $$ since (1) $\mathcal{G}^{\mathbf{X},\mathbf{Y}}_{\operatorname{psbd},i}$ cuts fewer edges than $\mathcal{G}_{\underline{X_i}\overline{\mathbf{X}^{\geq i+1}}}$, and (2) removing more edges from $\mathcal{G}^{\mathbf{X},\mathbf{Y}}_{\operatorname{psbd},i}$ to obtain $\mathcal{G}_{\underline{X_i}\overline{\mathbf{X}^{\geq i+1}}}$ doesn't decrease any d-separations. This means that it's impossible for $\mathbf{Z}$ to satisfy SAC while violating mSBD. Equivalently, whenever mSBD holds, SAC will also hold simultaneously. This observation allows us to come up with a scenario in which SAC holds but mSBD fails when there exists $\mathbf{Z}_i$ that is a descendant of $X_i$. We illustrated this scenario in lines 142-154. Specifically, in Fig. 2a, $\mathbf{Z}_2 := \{Z_c, Z_d\}$ does not satisfy the mSBD criterion since it's a descendant of $X_2$, while it does satisfy SAC.
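The descendant condition that separates mSBD from SAC here can be illustrated with a small helper. The toy DAG below is hypothetical (it is not the paper's Fig. 2a) and only demonstrates the mechanical check:

```python
def descendants(children, node):
    """Return all strict descendants of `node` in a DAG given as a
    node -> list-of-children adjacency map (depth-first traversal)."""
    seen, stack = set(), [node]
    while stack:
        for child in children.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# Hypothetical toy DAG: X1 -> X2, X2 -> Zc -> Y, X2 -> Zd -> Y.
toy = {"X1": ["X2"], "X2": ["Zc", "Zd"], "Zc": ["Y"], "Zd": ["Y"]}

# mSBD forbids stage-i covariates that are descendants of X_i, so a set
# like {Zc, Zd} is rejected here even if its d-separation requirement holds.
assert {"Zc", "Zd"} <= descendants(toy, "X2")
```

SAC drops this blanket descendant prohibition in favor of the forbidden-set condition, which is what opens up the cases described above.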
Summary: The paper investigates the problem of identifying total causal effects via sequential covariate adjustments, which generalizes the standard, well-studied covariate adjustment. Unlike the standard static case, where there exists a sound and complete graphical identification criterion, for the sequential counterpart only a sound but incomplete criterion is known. The incompleteness is demonstrated in the submitted work. The main achievement of the paper is the sequential (graphical) adjustment criterion which is sound and complete for sequential covariate adjustment that is much more involved, but more powerful than in the standard static adjustment. Strengths: The research of this work is well motivated and concerns an important problem in causality. The task of identifying the total causal effects via covariate adjustments is well studied and the authors improve the previous results in this area, presenting the first sound and complete graphical criterion for the sequential covariate adjustment which nicely extends the sound and complete criterion for the standard case. To prove the main result, the authors extend in an elegant way the constructive adjustment criterion by (van der Zander et al., 2014) based on the adjustment criterion proposed in (Shpitser et al., 2010). A nice technical result is a construction of sequential adjustment set (in Theorem 3). Weaknesses: The presentation of the paper needs improvement. A number of definitions and formulas require clarification and improving of mathematical precision. For example, in Definition 3 the authors consider H_i and in Eq. (2) they use in the formula h_{j-1}. So, what is the definition of h_{-1} and of h_{0}? Note that according to the definition of H_i, for H_{-1}, the sets X^{(-1)} and Y^{(-1)} are undefined, and for H_{0}, the set X^{(0)} is undefined. It seems that the authors mean, in such cases H_i are empty. Also, some sets Y^{(i)} can be empty. These issues should be clarified. 
Moreover, it is not clear what is the relationship between the sequential covariate adjustment formula presented in Definition 3 and the corresponding formula in (Jung et al., 2020, Theorem 1) used to identify causal effect by mSBD adjustment. In the submitted paper the authors present in Proposition 2 (mSBD adjustment (Jung et al., 2020)) formula (2) that differs from the original mSBD adjustment formula given in (Jung et al., 2020) in Theorem 1 as Eq. (4). So, to what extent is the incompleteness of mSBD criterion presented in Proposition 3 justified? For more remarks, see paragraph "question" below for details. The authors do not provide experimental results verifying the performance of the proposed algorithms. Technical Quality: 3 Clarity: 2 Questions for Authors: In Eq. (2) x_j and z_j are undefined for j=0. In Eq. (2) z_{j+1} is undefined for j=m. Write explicitly how do you define sets H_i in Definition 5. In the same way as in Definition 3? Why do you consider in Definition 3, 4, etc. in the sequence X = (X_1, . . . ,X_m) the elements X_i as sets? In fact every X_i denotes here a single variable. Eq. (7) is not correct, resp., it does not correspond to the definition given in Eq. (2) in Definition 3 as well as formula used in Eq. (6). Specifically, in the factor P(y2 | x1, x2, z1, z2) in Eq. (7) variable y1 in the conditioning set is missing. Note that in this example we have: Y1 = {Y1}. The formula presented in Eq. (2) for sequential covariate adjustment is different than the formula in (Jung et al., 2020, Theorem 1) used to identify causal effect by mSBD adjustment. Could you show that they are equivalent? The next issue concerns Theorem 3. The authors define there in Eq. (15) the forbidden sets F_i using in the right-hand side the sets H_{i-1}. Again, the sets H_{i} should be defined. E.g. if H_i := X^{(i)} ∪ Y^{(i)} ∪ Z^{(i)} -- as assumed in Definition 3 -- then X ∪ Y ∪ H_{i−1} in (15) is equal to X ∪ Y ∪ Z^{(i)}. So why do you use H_{i−1} in (15)? 
In Eq. (16) you use union of sets Z_{j}^{an} for j=1,..., i-1. For i=1 this summation makes no sense. In Definition 7: (Sequential Adjustment Criterion) --> (Sequential Adjustment Criterion (SAC)) Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable feedback and detailed comments. --- > For example, in Definition 3 the authors consider H_i and in Eq. (2) they use in the formula h_{j-1}. So, what is the definition of h_{-1} and of h_{0}? Note that according to the definition of H_i, for H_{-1}, the sets X^{(-1)} and Y^{(-1)} are undefined, and for H_{0}, the set X^{(0)} is undefined. It seems that the authors mean, in such cases H_i are empty. Also, some sets Y^{(i)} can be empty. These issues should be clarified. Thank you for your feedback. We implicitly defined a set whose indices are outside their original range as an empty set. For example, 1. In Eq. (2), all the subscripted values whose indices are out of range (e.g., $\mathbf{x}_0$, $\mathbf{z}_0$ and $\mathbf{z}_{m+1}$) are treated as empty sets. 2. In Eq. (16), the union ranges over indices $j$ such that $1 \leq j \leq i-1$; hence, if $i=1$, there is no union, i.e., an empty set is returned for the term. We will explicitly state this in the revised version of the paper. --- > The formula presented in Eq. (2) for sequential covariate adjustment is different than the formula in (Jung et al., 2020, Theorem 1) used to identify causal effect by mSBD adjustment. Could you show that they are equivalent? 
Here is the proof of the equivalence, as derived in line 393 in the Appendix: $\underbrace{P_\mathbf{x}(\mathbf{y}) = \sum_\mathbf{z} \prod_{j=0}^n P(\mathbf{y}_j \mid \mathbf{x}^{(j)},\mathbf{z}^{(j)},\mathbf{y}^{(j-1)})\prod_{k=1}^n P(\mathbf{z}_k \mid \mathbf{x}^{(k-1)},\mathbf{z}^{(k-1)},\mathbf{y}^{(k-1)})}_{\text{Jung et al., 2020, Thm 1}}$ $\qquad \quad = \sum_\mathbf{z} \prod_{j=0}^n P(\mathbf{y}_j \mid \mathbf{x}_j,\mathbf{z}_j,\mathbf{h}_{j-1})\prod_{k=1}^n P(\mathbf{z}_k \mid \mathbf{h}_{k-1})$ $\qquad \quad = \sum_\mathbf{z} \prod_{j=0}^n P(\mathbf{y}_j \mid \mathbf{x}_j,\mathbf{z}_j,\mathbf{h}_{j-1})P(\mathbf{z}_{j+1} \mid \mathbf{h}_j)$ $\qquad \quad = \underbrace{\sum_\mathbf{z} \prod_{j=0}^n P(\mathbf{z}_{j+1},\mathbf{y}_j \mid \mathbf{h}_{j-1},\mathbf{x}_j,\mathbf{z}_j) = P(\mathbf{y} \mid do(\mathbf{x}))}_{\text{Ours, Eq. (2)}}$ --- > Write explicitly how do you define sets H_i in Definition 5. In the same way as n Definition 3? Yes. We will invoke the definition of $\mathbf{H}_i$ from Def. 3 in Def. 5. --- > Why do you consider in Definition 3, 4, etc. in the sequence X = (X_1, . . . ,X_m) the elements X_i as sets? In fact every X_i denotes here a single variable. This is because a set serves as a basic unit for graphical operators like $Pa$, $An$, and so on. Furthermore, by treating $X_i$ as a set, we can maintain consistency in handling $X_i$, $\mathbf{Y}_i$, and $\mathbf{Z}_i$ simultaneously. --- > Eq. (7) is not correct, resp., it does not correspond to the definition given in Eq. (2) in Definition 3 as well as formula used in Eq. (6). Specifically, in the factor P(y2 | x1, x2, z1, z2) in Eq. (7) variable y1 in the conditioning set is missing. Note that in this example we have: Y1 = {Y1}. Thank you for catching these typos. We will make the necessary corrections. --- > then X ∪ Y ∪ H_{i−1} in (15) is equal to X ∪ Y ∪ Z^{(i)}. So why do you use H_{i−1} in (15)? 
We use $\mathbf{H}_{i-1}$ in (15) to maintain comprehensibility. Specifically, Equation (15) reads as follows -- the forbidden set is a union of (1) treatments, (2) outcomes, (3) predecessors ($\mathbf{H}_{i-1}$), (4) dpcp, and (5) a descendant set. --- > In Definition 7: (Sequential Adjustment Criterion) --> (Sequential Adjustment Criterion (SAC)) We will revise as suggested. Thanks. --- Rebuttal Comment 1.1: Title: Comment Comment: Thanks to the authors for all the answers that address my concerns. --- Reply to Comment 1.1.1: Comment: We are pleased that we have addressed all of your concerns. Thank you for taking the time to offer your valuable and detailed feedback.
Summary: To estimate causal effects given observational data, previous works provide a graphical criterion that is not complete in the sequential and multi-outcome cases. The following work extends the complete adjustment criterion to these cases. The non-completeness of the previous sequential and multi-outcome criterion is highlighted and then a complete graphical criterion is proposed. The authors then propose an algorithm that finds the minimum adjustment set that satisfies SAC relative to some treatment and outcome in the graph. Strengths: - The paper is clearly written, the problem being solved is clear as well as the limitations of previous work. The examples also help a lot in this. - Despite the low novelty (in weakness) this has not been done before and having access to a complete graphical criterion for sequential adjustment is of practical importance. Weaknesses: - The work does seem to be a simple extension of AC to the sequential case. I'm not sure what the main new insight here is, as the concepts required for SAC seem very simple extensions of AC. The adjustment in AC opens all causal paths and blocks all others, following this logic onto the sequential case seems to imply the criterion given in SAC. Technical Quality: 3 Clarity: 3 Questions for Authors: Questions: - For an adjustment set Z where mSBD holds, SAC holds as well (corollary 1) but can SAC provide a smaller adjustment set? - In the example of Figure 3, doesn't SAC collapse to be the same as AC? Minor comments: - Definition of proper causal paths is different from previous papers (e.g. Shpitser 2010), would be good to give intuition here. The definition of proper causal path may not be strictly true. - References for definitions in Section 3? - H_i should be redefined in Definition 5 to make the definition complete, or refer back to where it is defined Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: This is clear throughout the work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and valuable feedback! --- > The work does seem to be a simple extension of AC to the sequential case. Our proposed method is an extension of the AC, similar to how the sequential back-door adjustment extends the back-door adjustment. The principle of our method is to open the proper causal paths and block the proper non-causal paths. However, we disagree that the extension procedure is simple. As the saying goes “The devil is in the details”, there are nontrivial challenges when extending the AC to the SAC: 1. Unlike the adjustment case, partitioning the variables is essential. Choosing a proper partition that preserves causal interpretation (e.g., $\mathbf{X}_i$ causes $\mathbf{Y}^{\geq i}$) is nontrivial. 2. There is a significant gap between conjecturing and actually witnessing the soundness and completeness of SAC. Specifically, the formal approach for witnessing the soundness and completeness of AC proposed by [1] is hardly generalizable to SAC cases. The original proof of AC by [1] uses the counterfactual network [2] graph to show the soundness and completeness of AC. Such a proof technique is hardly generalizable for sequential cases, since, by the nature of the counterfactual network, the number of treatment variables to be considered in the proof increases exponentially with the number of treatments. To circumvent such challenges, we devised a new proof technique that can be commonly applied to both AC and SAC. 3. One naive approach is to condition the previous set $\mathbf{H}_{i-1}$, and apply the AC with respect to $(X_i, \mathbf{Y}^{\geq i})$ in the graph where the previous set is conditioned on. However, the graph induced by conditioning the previous set may not be a DAG (with bidirected edges), even if the original causal graph is a DAG. This happens when the previous set contains colliders [3]. 
Circumventing these challenges while adhering to the principle of opening the proper causal paths and blocking the proper non-causal paths is a nontrivial problem requiring formal treatment, as we have done in this paper. --- > For an adjustment set Z where mSBD holds, SAC holds as well (corollary 1) but can SAC provide a smaller adjustment set? Just to clarify, please note that mSBD and SAC determine whether a causal effect can be expressed as Sequential Covariate Adjustment (SCA) when $\mathbf{Z}$ is given, rather than constructing $\mathbf{Z}$ itself. Algorithm 1 (minSCA) is designed to provide a smaller (or equal) adjustment set such that no strict subset of it is a valid adjustment set. For example, consider Figure 4a. Here, $\mathbf{Z} = (\emptyset, \{Z_a, Z_b, Z_c\})$ satisfies the mSBD criterion (and SAC) relative to $((X_1, X_2), (\emptyset, Y))$. However, as discussed in lines 301-308, Algorithm 1 (minSCA) yields $\mathbf{Z}^{min} = (\emptyset, \{Z_a\})$, and this set also satisfies the SAC. This example illustrates that the minSCA procedure can yield a smaller adjustment set. --- > In the example of Figure 3, doesn't SAC collapse to be the same as AC? In the example, the set $\mathbf{Z} = \\{ Z_a,Z_b \\}$ satisfies AC, and $\mathbf{Z} := (\\{Z_a,Z_b\\},\\{\\})$ satisfies SAC. In other words, the set $\mathbf{Z} = \\{ Z_a,Z_b \\}$ is a valid admissible \& sequentially admissible set. However, this observation doesn't hold in general. For example, $\mathbf{Z} := (\\{\\},\\{\\})$ also satisfies the SAC in the same graph. However, the AC is not satisfied, because $\mathbf{Z}$ fails to block the path $Y_2 \to Z_b \to X_2$ in the proper back-door graph $\mathcal{G}_{\text{pbd}}^{\mathbf{X},\mathbf{Y}}$. --- > Definition of proper causal paths is different from previous papers (e.g., Shpitser 2010), would be good to give intuition here. The definition of proper causal path may not be strictly true. 
Our definition is comparable with the definitions in the previous papers as follows. First, a causal path (equivalently, a _directed path_) is said to be proper if it does not intersect $\mathbf{X}$ except at the end point [1]. Equivalently, the path is _proper_ if only its start node is in $\mathbf{X}$ [4]. These definitions match our definition in line 78, which states that the path does not contain a node in $\mathbf{X} \setminus \\{ X \\}$. In other words, the path starting from $X$ does not overlap $\mathbf{X}$ if and only if the path doesn't contain a node in $\mathbf{X} \setminus \\{ X \\}$. Therefore, our definition is equivalent to the definitions in the previous papers. --- > References for definitions in Section 3? We presume that you are referring to Definition 3 (SCA). We will add a reference just before Definition 3; sequential covariate adjustment is already referenced in lines 28-29 of the Introduction. --- > H_i should be redefined in Definition 5 to make the definition complete, or refer back to where it is defined Thank you for the suggestion. We first note that $\mathbf{H}_{i-1}$ has been defined in Def. 3. We will invoke this definition in Def. 5. --- <Reference> [1] Shpitser, I., VanderWeele, T., and Robins, J. M. (2010). On the validity of covariate adjustment for estimating causal effects. [2] Shpitser, I. and Pearl, J. (2008). Complete identification methods for the causal hierarchy. *Journal of Machine Learning Research*, 9:1941–1979. [3] Richardson, T., & Spirtes, P. (2002). Ancestral graph Markov models. *The Annals of Statistics*, 30(4), 962-1030. [4] van der Zander, B., Liśkiewicz, M., and Textor, J. (2014). Constructing separators and adjustment sets in ancestral graphs. In *Proceedings of the UAI 2014 Conference on Causal Inference: Learning and Prediction - Volume 1274*, pages 11–24. --- Rebuttal Comment 1.1: Comment: Thank you again for your detailed comments. 
The discussion period is nearing its end, with just about a day remaining. Could you please confirm if our rebuttal has adequately addressed your concerns and comments?
Summary: This paper develops a complete and constructive criterion for sequential covariate adjustment. An algorithm is provided to identify a minimal sequential covariate adjustment set, which is efficient in that no unnecessary vertices are included. Strengths: - The problem considered is important in causal inference. - The discussion of the limitations of existing criteria is insightful, often illustrated with examples. - The theoretical results and algorithm appear to be coherent and sound, though I am not familiar enough with the details of covariate adjustment to verify them by going through the proofs. Weaknesses: - Simulation studies are not provided, though this is fine for a theory-focused paper. - The computational complexity of the algorithm is not discussed. Technical Quality: 3 Clarity: 3 Questions for Authors: What is the computational complexity of the proposed algorithm? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I did not manage to find a clear discussion of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for sharing your thoughts and feedback with us. > Simulation studies are not provided, though it is fine for a theory-focused paper. As you noted, our work is theory-oriented, presenting a complete criterion for sequential covariate adjustment and introducing an algorithm for constructing a (minimal) sequential adjustment set. Our focus has been on clearly elucidating the theories and algorithms through appropriate examples. We believe that simulation studies or experimental validations would not enhance our theoretical work. However, to demonstrate the practical benefits of our theories, we will provide implementation code during the revision process. --- > The computational complexity 1. The proper sequential back-door graph (in Def. 6) can be constructed in $O(|\mathbf{V}|+|\mathbf{E}|)$ time [1]. 2. Provided the partition of $\mathbf{Z}= (\mathbf{Z}_1,\cdots,\mathbf{Z}_m)$, checking whether such $\mathbf{Z}$ satisfies the SAC in Def. 7 takes $O(m|\mathbf{V}| + m|\mathbf{E}|)$, since checking each d-separation takes $O(|\mathbf{V}| + |\mathbf{E}|)$. 3. Constructing $\mathbf{Z}^{an}_i$ in Theorem 3 takes $O(|\mathbf{V}| + |\mathbf{E}|)$, since finding the dpcp, descendant, and ancestor sets takes $O(|\mathbf{V}| + |\mathbf{E}|)$. 4. As mentioned in line 286, finding the closure can be done in $O(|\mathbf{V}| + |\mathbf{E}|)$. 5. Combining all of these analyses, the computational complexity of the proposed method is $O(m|\mathbf{V}| + m|\mathbf{E}|)$. --- <Reference> [1] van der Zander, B., Liśkiewicz, M., and Textor, J. (2014). Constructing separators and adjustment sets in ancestral graphs. In Proceedings of the UAI 2014 Conference on Causal Inference: Learning and Prediction - Volume 1274, pages 11–24. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. I do not have further concerns beyond these, and will retain my score until further discussion with the other reviewers. 
--- Reply to Comment 1.1.1: Comment: We’re glad to have addressed all the concerns from you and other reviewers, `aPdz` and `VqEB`. Thank you for taking the time to review our work.
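The complexity analysis in the rebuttal above leans on linear-time d-separation checks. As an illustrative sketch only (hypothetical helper names and plain adjacency dicts, not the authors' code), the classic moralization-based test takes the ancestral subgraph of $X \cup Y \cup Z$, moralizes it, removes $Z$, and checks reachability:

```python
from collections import deque

def ancestors(parents, nodes):
    """All ancestors of `nodes` (inclusive) in a DAG given as child -> parents."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, xs, ys, zs):
    """Check X _||_ Y | Z via moralization of the ancestral subgraph."""
    keep = ancestors(parents, set(xs) | set(ys) | set(zs))
    # Build the moral (undirected) graph: link each node to its parents
    # and "marry" co-parents of a common child.
    adj = {v: set() for v in keep}
    for v in keep:
        ps = [p for p in parents.get(v, ()) if p in keep]
        for p in ps:
            adj[v].add(p)
            adj[p].add(v)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j])
                adj[ps[j]].add(ps[i])
    # Delete Z and test whether any Y node is reachable from X.
    zset, ytargets = set(zs), set(ys)
    frontier = deque(x for x in xs if x not in zset)
    reached = set(frontier)
    while frontier:
        v = frontier.popleft()
        if v in ytargets:
            return False
        for w in adj[v] - zset:
            if w not in reached:
                reached.add(w)
                frontier.append(w)
    return True

# Chain X -> Z -> Y: conditioning on Z separates X from Y.
chain = {'Z': ['X'], 'Y': ['Z']}
# Collider X -> C <- Y: conditioning on C opens the path.
collider = {'C': ['X', 'Y']}
```

Each query touches every vertex and edge at most a constant number of times, consistent with the $O(|\mathbf{V}| + |\mathbf{E}|)$ per-check bound quoted in the rebuttal.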
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation
Accept (poster)
Summary: This paper introduces a dual strategy for real-world image restoration: GenIR and DreamClear. GenIR, a novel data curation pipeline, addresses dataset limitations by creating a large-scale dataset of one million high-quality 2K images. DreamClear, a Diffusion Transformer-based model, utilizes generative priors from text-to-image diffusion models and multi-modal large language models for photorealistic restorations. It also incorporates the Mixture of Adaptive Modulator (MoAM) to handle various real-world degradations. Extensive experiments confirm DreamClear's superior performance, demonstrating the effectiveness of this approach. Strengths: 1. The writing and structure of the paper are clear and well-organized, making it easy to follow the authors' arguments and methodologies. 2. The authors' introduction of a privacy-safe dataset curation pipeline is significant. This is especially crucial in the era of large models, where data privacy and security are paramount concerns. 3. The experimental results convincingly demonstrate the potential of the proposed method, highlighting its effectiveness and applicability in real-world image restoration scenarios. Weaknesses: 1. The quality assessment of the newly constructed dataset is lacking, both quantitatively and qualitatively. Given the goal of real-world image restoration, I am concerned about whether the current pipeline effectively advances this objective. The authors need to include a discussion on this matter. 2. In the ablation studies, the authors should provide a detailed discussion on the interaction between the dual branches. The motivation for this design choice requires further explanation. 3. Real-world degradations are complex and intertwined. The authors should compare their approach with SUPIR using more challenging examples to better demonstrate the robustness and applicability of their method. Technical Quality: 3 Clarity: 3 Questions for Authors: See the above weaknesses part. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the relevant limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for the valuable and positive comments on our work. We address the questions and clarify the issues accordingly as described below. **Q1: Quality assessment of the dataset** **[Reply]** Thank you for this valuable advice. We calculate the FID value between different datasets and DIV2K. The FID results are as follows:

| Flickr2K | DIV8K | LSDIR | OST | Ours (Generated) |
| :------: | :---: | :---: | :-----: | :--------------: |
| 69.83 | 72.28 | 60.22 | 110.29 | 72.88 |

The results indicate that our generated dataset achieves FID values comparable to those of commonly used image restoration datasets. This suggests that our dataset maintains a relatively high image quality. Additionally, we provide some example images from our dataset and LSDIR in the **attached PDF file** (Figure III). We also conduct ablation studies to explore the influence of different training datasets. We retrain DreamClear using different datasets, with the results provided below (evaluated on DIV2K-Val):

| Trainset | LPIPS $\downarrow$ | DISTS $\downarrow$ | FID $\downarrow$ | MANIQA $\uparrow$ | MUSIQ $\uparrow$ | CLIPIQA $\uparrow$ |
| :------- | :------: | :------: | :------: | :------: | :------: | :------: |
| Existing IR+Generated | 0.3391 | 0.1791 | 23.24 | 0.4327 | 68.93 | 0.6978 |
| Generated | 0.3601 | 0.1847 | 27.84 | 0.4295 | 69.31 | 0.6823 |
| Existing IR | 0.3493 | 0.1873 | 24.10 | 0.4154 | 63.24 | 0.6435 |

It shows that using the generated dataset achieves perceptual metrics (LPIPS, DISTS) similar to those obtained with existing IR datasets, and obtains a significant advantage on no-reference metrics (MANIQA, MUSIQ, CLIPIQA). We speculate that the gap in FID is mainly due to the consistent distribution between the training and validation sets in the DIV2K dataset. The results indicate that our generated data can effectively improve the quality of restored images compared to existing IR datasets, and combining the two achieves the best performance. 
We also provide some real-world visual comparisons in the **attached PDF file**, which demonstrate the effectiveness of our generated data in enhancing visual effects. We'll add the ablations in the final version. **Q2: Discussion on the dual branch design** **[Reply]** Thanks for this insightful suggestion. Table 3 presents the quantitative results, while Figure 7 illustrates the qualitative outcomes of the ablation experiments. Some discussion is provided in Appendix A.3. Below, we provide a detailed discussion of the dual-branch design: _Reference Branch:_ The incorporation of the reference branch allows the model to focus less on degradation removal and more on enhancing image details through generative priors, ultimately producing more photorealistic images. As shown in Table 3, when the reference branch is removed, all metrics on the low-level and high-level benchmarks deteriorate significantly. Figure 7 also shows that the reference branch can significantly enhance the quality of the image. This indicates that the reference branch plays an important role in DreamClear for improving both image quality and semantic information. _Interaction Modules_: The proposed Mixture-of-Adaptive-Modulator (MoAM) acts as an interaction module between the LQ branch and the reference branch, aiming to enhance the model's robustness to intricate real-world degradations by explicitly guiding it through the introduction of token-wise degradation priors. It obtains the degradation map through cross-attention between the LQ features and reference features, guiding the dynamic fusion of expert knowledge to modulate LQ features. To validate its effectiveness, we replace MoAM with different interaction modules: * AM: As shown in Table 3, when replacing MoAM with AM, all metrics undergo a substantial decrease. Similarly, Figure 7 demonstrates that this replacement leads to more blurred textures in both the bear and the snow. 
These findings underscore the importance of the degradation prior guidance in MoAM for steering restoration experts. * Cross-Attention / Zero-Linear: In addition to AM, we also tried other feature interaction modules, including cross-attention and zero-linear. Table 3 shows that their performance is inferior to AM across all metrics. Figure 7 shows that when using zero-linear, the bear's texture becomes blurred and there are semantic errors on its back. When using cross-attention, many artifacts appear, and there are also semantic errors on the bear's nose. Therefore, using AM as the interaction module is more suitable for the IR task to achieve better restoration results. We'll add the detailed discussion in the final version. **Q3: More real-world comparisons** **[Reply]** Thanks for this valuable suggestion. First, we reanalyze the data from the user study. Specifically, we focus on the analysis of 33 real-world samples (w/o GTs) out of the original 100 images. The results are provided as follows:

| Method | SinSR | StableSR | DiffBIR | SeeSR | SUPIR | DreamClear |
| :----: | :---: | :------: | :-----: | :---: | :---: | :--------: |
| Vote Percentage | 8.1% | 5.5% | 7.6% | 11.2% | 23.1% | **44.4%** |
| Top-1 Ratio | 3.0% | 0.0% | 0.0% | 0.0% | 15.2% | **81.8%** |
| Top-2 Ratio | 9.1% | 3.0% | 9.1% | 9.1% | 69.7% | **100.0%** |

It shows the superiority of DreamClear in restoring real-world images. Besides, we also provide more real-world comparisons with SUPIR in the **attached PDF file** (Figure II). It demonstrates that our method can achieve clearer details and fewer artifacts when dealing with real-world cases. We'll add more real-world comparisons in our final paper. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. I maintain my positive score. --- Reply to Comment 1.1.1: Comment: Thank you for your acknowledgment of our work and responses. We appreciate your constructive feedback that has helped refine our research. 
Please feel free to reach out if you have further queries or need additional clarification on our work.
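The FID numbers quoted earlier in this thread compare Gaussian statistics of deep features between datasets. As a minimal sketch (assuming feature vectors have already been extracted, e.g. Inception activations; `frechet_distance` is a hypothetical name, not the authors' evaluation code), the Fréchet distance between the two fitted Gaussians is $\|\mu_a-\mu_b\|^2 + \mathrm{Tr}\big(C_a + C_b - 2(C_a C_b)^{1/2}\big)$:

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """FID-style distance between Gaussians fitted to two feature
    arrays of shape (n_samples, dim)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    diff = mu_a - mu_b
    # Tr((C_a C_b)^{1/2}) equals the sum of square roots of the eigenvalues
    # of C_a C_b, which are real and non-negative for PSD covariances.
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)

# Sanity checks on synthetic "features".
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 4))
same = frechet_distance(feats, feats)           # ~0: identical statistics
shifted = frechet_distance(feats, feats + 3.0)  # ~36: 4 dims * 3^2 mean shift
```

Note that real FID evaluations compare Inception-feature statistics over large image sets; the sketch only shows the closed-form distance once those features exist.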
Summary: The paper introduces a dual strategy to tackle the challenges of image restoration (IR) datasets and the development of high-capacity models for image restoration. This strategy comprises: GenIR: An innovative data curation pipeline designed to bypass the laborious data crawling process, providing a privacy-safe solution for IR dataset construction. The authors generated a large-scale dataset of one million high-quality images. DreamClear: A Diffusion Transformer (DiT)-based image restoration model utilizing generative priors from text-to-image (T2I) diffusion models and the perceptual capabilities of multi-modal large language models (MLLMs) to achieve photorealistic restorations. Strengths: The authors made a significant contribution to creating a large and robust image restoration dataset while providing a privacy-safe solution. The authors demonstrated detailed evaluation with recent SOTAs. The restoration model shows good performance across different evaluation metrics. The model is robust enough to handle various degradations such as deblurring and denoising. Weaknesses: For the created data, the paper claimed they maintained privacy standards to ensure no personal information was embedded in the generated images. However, the author failed to provide clear details on how these were achieved. Hence, it is difficult to establish the robustness and effectiveness of the privacy-preserving measures. Given that the SR model, DreamClear, is an integration of various restoration models and is also built on PixArt, a detailed comparative analysis of the computational complexity between the proposed framework and other existing methods would be beneficial. While DreamClear exhibits good performance in perceptual quality, its performance on traditional metrics like PSNR and SSIM is not as strong. Some grammatical errors are observed on line 96, and incomplete statements on lines 109 and 110 should be corrected. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Given that a million images were generated, how did the authors verify that none of the images had personal information? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for the valuable and positive comments on our work. We address the questions and clarify the issues accordingly as described below. **Q1: About privacy preservation** **[Reply]** Thanks for this valuable advice. To minimize the risk of generating images that contain private information, we have implemented constraints from two aspects: * **Text Prompt Filtering**: As illustrated in the paper (Lines 160-162), we leverage Gemini to generate millions of text prompts for the T2I model. Instead of directly using these text prompts, we utilize Gemini to review them and filter out any prompts that contain private information. We set the prompt for Gemini as _"You are an AI language assistant, and you are analyzing a series of text prompts. Your task is to identify whether these text prompts contain any inappropriate content such as personal privacy violations or NSFW material. Delete any inappropriate text prompts and return the remaining ones in their original format."_ All text prompts were filtered through Gemini, and ultimately we retained one million text prompts. * **Generated Image Filtering**: As shown in Figure 2, we utilize Gemini as a powerful MLLM to ascertain whether the generated images exhibit blatant semantic errors or personal private content. We set the prompt for Gemini as _"You are an AI visual assistant, and you are analyzing a single image. Your task is to check the image for any anomalies, irregularities, or content that does not align with common sense or normal expectations. Additionally, identify any inappropriate content such as personal privacy violations or NSFW material. If the image does not contain any of the aforementioned issues, it has passed the inspection. Please determine whether this image has passed the inspection (answer yes/no) and provide your reasoning."_ Samples that do not pass the inspection will be eliminated. 
Compared to directly crawling data from the web, the proposed GenIR pipeline significantly lowers the risk of infringing on individuals' privacy. We believe that the GenIR pipeline presents a promising solution for addressing security and ethical concerns in contemporary research involving large models. We'll add the privacy preservation details in our final paper. We acknowledge that ensuring robust privacy preservation poses significant challenges. In this paper, we conduct an exploration of privacy preservation methods with the assistance of advanced MLLMs. Given that the privacy preservation is not the primary focus of our paper, we anticipate exploring this significant area with the community in the future. Regarding the generation of one million images, we employ the aforementioned methods with the help of MLLMs to minimize the inclusion of personal information as much as possible. We acknowledge that achieving 100% avoidance remains a significant challenge. However, before publicly releasing the dataset, we will screen out all images containing faces and compare them with publicly available face datasets (e.g., CelebA, FFHQ) to remove any images that contain personal information. **Q2: About computational complexity** **[Reply]** Thanks for this valuable suggestion. Please refer to the **global response**. **Q3: About PSNR and SSIM** **[Reply]** Thank you for highlighting this point. For real-world image restoration, when the degradation is severe, it is challenging to pursue highly accurate reconstruction; instead, the focus is more on achieving better visual quality [1,2]. Table 1 shows that both SUPIR and DreamClear perform poorly in terms of PSNR and SSIM. However, despite lacking an advantage in full-reference metrics like PSNR and SSIM, SUPIR and DreamClear can produce excellent visual effects. 
As mentioned in the paper (Lines 244-245), many recent works [1,3,4] also observe this phenomenon and suggest that it is necessary to reconsider the reference value of existing metrics and to propose more effective methods to evaluate advanced image restoration methods. Therefore, we conducted a user study in the paper to more comprehensively measure the capabilities of different IR models. We believe that as the field of image quality assessment (IQA) evolves, more suitable metrics will emerge to adequately measure the performance of advanced IR models. We will include further discussion on this topic in our final paper. **Q4: About writing errors** **[Reply]** Thanks for pointing this out. We'll carefully check the whole paper and correct these errors in the final version. **References** [1] Scaling up to excellence: Practicing model scaling for photo-realistic image restoration in the wild. In CVPR, 2024. [2] SeeSR: Towards Semantics-Aware Real-World Image Super-Resolution. In CVPR, 2024. [3] Depicting beyond scores: Advancing image quality assessment through multi-modal language models. In ECCV, 2024. [4] Descriptive Image Quality Assessment in the Wild. arXiv preprint arXiv:2405.18842, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. I appreciate the effort the authors have made to address the concerns I raised. However, the computational complexity of this approach continues to be a significant challenge, particularly when considering the need to train the model on new datasets. The fact that training took 7 days on 32 A100 GPUs raises concerns about its practicality in other real-world scenarios, especially in environments with limited resources. Therefore, I am maintaining my initial rating. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our work and providing thoughtful feedback on our rebuttal. In the era of "large models," many fields are progressing towards the development of foundational models. 
Creating these foundational models requires substantial data and computational resources. In this paper, we approach the problem from both the data and model perspectives, aiming to expand the capability limits of image restoration models. This exploration is vital for understanding the strengths and limitations of large-model-based approaches within the domain of image restoration. We are optimistic that advancements in model distillation and quantization will significantly enhance our approach by reducing model size while maintaining effectiveness. We greatly appreciate your time and insightful feedback throughout the discussion period. Please feel free to reach out if you have further queries or need additional clarification on our work.
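The training-cost discussion in this thread reduces to simple GPU-day arithmetic. A small sketch (the function name is ours; the 2.2× A100-vs-V100 speedup and the A6000≈V100 simplification mirror the conversion the authors cite elsewhere in the rebuttals):

```python
def to_a100_days(days, n_gpus, a100_like=True, a100_vs_v100_speedup=2.2):
    """Convert a training run into approximate A100 GPU-days.
    Non-A100 GPUs are conservatively treated as V100-class."""
    gpu_days = days * n_gpus
    return gpu_days if a100_like else gpu_days / a100_vs_v100_speedup

dreamclear_days = to_a100_days(7, 32)               # 224 A100 GPU-days
supir_days = to_a100_days(10, 64, a100_like=False)  # ~291 (A6000 treated as V100-class)
```

Under these assumptions the gap is roughly 67 A100 GPU-days in DreamClear's favor, and larger still if the A6000 is faster than a V100.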
Summary: The authors propose GenIR, a privacy-safe automated pipeline that generates a large-scale dataset of one million high-quality images for training image restoration (IR) models. Additionally, they introduce DreamClear, an IR model that seamlessly integrates degradation priors into diffusion-based IR models. DreamClear features the novel Mixture of Adaptive Modulator (MoAM), which adapts to diverse real-world degradations. Comprehensive experiments demonstrate its outstanding performance in managing complex real-world situations, marking a substantial progression in IR technology. Strengths: The authors propose an automated data curation pipeline for image restoration, and extensive experiments across both low-level and high-level benchmarks demonstrate DreamClear's state-of-the-art performance in handling intricate real-world scenarios. Weaknesses: 1. The overall impact of the generated dataset is not that convincing. Why is the training also performed on DIV2K, Flickr2K, LSDIR and Flickr8K? 2. The overall architectural contribution is too limited, as the concept of mixture of experts modified into mixture of adaptive modulators, as well as text-to-image diffusion models, have already been widely explored in the field of image restoration. 3. The first ablation study does not highlight the impact of the proposed dataset. 4. It would be interesting to see the results without dual-based prompt learning, if time permits. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Do Eq. 2 and Eq. 3 hold only for x_lq, or do they also hold for x_ref? Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes, the authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for the valuable and positive comments on our work. We address the questions and clarify the issues accordingly as described below. **Q1: About the generated dataset** **[Reply]** Thanks for your valuable suggestion. The primary goal of the proposed GenIR is to expand the dataset as much as possible to train a more robust model. As shown in Figure 6, the performance of a generated dataset of equivalent size is inferior to that of the real dataset. Therefore, by combining existing IR datasets with our generated dataset, we aim to achieve optimal model performance. We also conduct ablation studies to explore the influence of different training datasets. We retrain DreamClear using different datasets, with the results provided below (evaluated on DIV2K-Val):

| Trainset | LPIPS $\downarrow$ | DISTS $\downarrow$ | FID $\downarrow$ | MANIQA $\uparrow$ | MUSIQ $\uparrow$ | CLIPIQA $\uparrow$ |
| :------- | :------: | :------: | :------: | :------: | :------: | :------: |
| Existing IR+Generated | 0.3391 | 0.1791 | 23.24 | 0.4327 | 68.93 | 0.6978 |
| Generated | 0.3601 | 0.1847 | 27.84 | 0.4295 | 69.31 | 0.6823 |
| Existing IR | 0.3493 | 0.1873 | 24.10 | 0.4154 | 63.24 | 0.6435 |

It shows that using the generated dataset achieves perceptual metrics (LPIPS, DISTS) similar to those obtained with existing IR datasets, and obtains a significant advantage on no-reference metrics (MANIQA, MUSIQ, CLIPIQA). We speculate that the gap in FID is mainly due to the consistent distribution between the training and validation sets in the DIV2K dataset. The results indicate that our generated data can effectively improve the quality of restored images compared to existing IR datasets, and combining the two achieves the best performance. We also provide some visual comparisons in the **attached PDF file** (Figure IV), which demonstrate the effectiveness of our generated data in enhancing visual effects. We'll add the ablations in the final version. 
**Q2: About our contribution** **[Reply]** Thanks. In this paper, our technical contributions are mainly divided into two aspects: **GenIR.** We introduce the GenIR pipeline, which provides a privacy-safe and cost-effective method to generate data for image restoration. Ultimately, we obtain a dataset containing one million high-quality images and verify the effectiveness of the generated data for real-world image restoration. **DreamClear.** Recent works based on text-to-image diffusion models indeed demonstrate their superior performance in real-world image restoration. Compared to existing works, the architecture of DreamClear mainly differs in the following three aspects: * We propose a dual-branch structure, which incorporates the reference image, allowing the model to focus less on degradation removal and more on enhancing image details through generative priors, ultimately producing more photorealistic images. * We propose a novel Mixture of Adaptive Modulator (MoAM) to enhance our model's robustness to intricate real-world degradations by explicitly guiding it through the introduction of token-wise degradation priors. Unlike the MoE structure commonly used in LLMs, MoAM dynamically guides different restoration experts through degradation maps to achieve more robust image restoration. * To the best of our knowledge, our work is the first to explore the performance of the diffusion transformer (DiT) architecture in image restoration. We will add the discussion on how our work differs from related works in our final paper. We also hope to get more comments and suggestions from you, such as related works built upon MoE structures in low-level vision, to further improve our paper. **Q3: About the first ablation** **[Reply]** Thanks. The main purpose of the first ablation is to explore whether expanding the scale of the generated data can improve the effects of real-world image restoration. 
To examine the impact of the proposed dataset, we conduct ablations using different training datasets. Please refer to the **Reply to Q1**. **Q4: About dual-based prompt learning** **[Reply]** Thanks for your valuable advice. We remove the dual-based prompt learning strategy and retrain GenIR. We provide qualitative comparisons of the GenIR-generated images for the ablation study. The results are provided in the **attached PDF file** (Figure V). They show that the dual-based prompt learning strategy can effectively enhance image texture details, making the generated images more suitable for image restoration training. We'll add more ablation results in the final version. **Q5: About Eq. (2) and Eq. (3)** **[Reply]** Thanks. As illustrated in the paper (Lines 206-208), $x_{ref}$ is directly fed into AM as the conditional information, while $x_{lq}$ is fed into the mixture of degradation-aware experts structure. Figure 3(c) also depicts this process. Therefore, Eq. (2) and Eq. (3) hold only for $x_{lq}$, not for $x_{ref}$. We'll give a clear illustration of this process in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I highly appreciate the effort made by the authors to address the weaknesses and questions asked. I still have a concern regarding Q2: can you please elaborate in more detail on how the MoAM module helps in achieving robust image restoration, and how the inclusion of a reference image helps the model focus less on degradation removal? Considering the general response given by the authors, where they mentioned the training complexity, it is very hard to see the practical implications of the proposed method in real-world applications, so I will change my rating to borderline reject. --- Reply to Comment 1.1.1: Title: Response to Remaining Concerns (Part 1/2) Comment: Thanks a lot for providing thoughtful feedback on our rebuttal! We address your remaining concerns as described below. **1. 
About Reference Branch** **[Reply]** As outlined in the paper (Line 187), we employ a lightweight image restoration network (i.e., SwinIR trained with L1 loss) to generate preliminary restored images as reference images. While these images may lack fine details, they are largely free from blur, noise, and JPEG artifacts. Consequently, guided by the reference image, our model can more readily identify degraded content in the low-quality (LQ) image, enabling it to concentrate more effectively on enhancing image details. We also conduct ablations to examine the role of the reference branch. As shown in Table 3, when the reference branch is removed, all metrics on the low-level and high-level benchmarks deteriorate significantly. Figure 7 also shows that the reference branch can significantly enhance the quality of the image. This indicates that the reference branch plays an important role in DreamClear for improving both image quality and semantic information. **2. About MoAM** **[Reply]** The proposed MoAM acts as an interaction module between the LQ branch and the reference branch, aiming to enhance the model's robustness to intricate real-world degradations by explicitly guiding it through the introduction of token-wise degradation priors. It obtains the degradation map through cross-attention between the LQ features and reference features, guiding the dynamic fusion of expert knowledge to modulate LQ features. To validate its effectiveness, we replace MoAM with different interaction modules: * AM: As shown in Table 3, when replacing MoAM with AM, all metrics undergo a substantial decrease. Similarly, Figure 7 demonstrates that this replacement leads to more blurred textures in both the bear and the snow. These findings underscore the importance of the degradation prior guidance in MoAM for steering restoration experts. * Cross-Attention / Zero-Linear: In addition to AM, we also tried other feature interaction modules, including cross-attention and zero-linear.
Table 3 shows that their performance is inferior to AM across all metrics. Figure 7 shows that when using zero-linear, the bear's texture becomes blurred and there are semantic errors on its back. When using cross-attention, many artifacts appear, and there are also semantic errors on the bear's nose. Therefore, adaptive-modulation-based interaction (AM, and its extension MoAM) is more suitable for the IR task, achieving better restoration results. --- Reply to Comment 1.1.2: Title: Response to Remaining Concerns (Part 2/2) Comment: **3. About Real-World Applications** **[Reply]** We additionally provide the training computational cost (GPU days) compared with the recent SOTA SUPIR published at CVPR: | | | | | | | :-------: | :------: | :------: | :------: |:------: | | **Methods** | **Params** | **Training Time** | **GPU Days** |**Inference Time** | | SUPIR | 3865.6M | $\approx$ 10 days using 64 A6000 |> 291 A100| 16.36s| | DreamClear | **2283.7M** | **$\approx$ 7 days using 32 A100** | **224 A100** | **15.84s**| Due to the lack of accurate speed comparisons between the A6000 GPU (48GB) and the A100 GPU (80GB), we treat the A6000 GPU as equivalent to the V100 GPU (32GB) for calculations (though the A6000's computational power is clearly superior to the V100). Following the previously published works [1,2], we convert the V100 days to A100 days by assuming a 2.2× speedup of A100 compared to V100. **The results show that our DreamClear method requires at least 70 fewer A100 GPU days compared to SUPIR.** We acknowledge that DreamClear still requires relatively large training costs in the field of low-level vision. However, as we move further into the "large model" era, various fields are advancing towards the development of foundational models, which inherently require vast amounts of data and computational resources. **In this paper, we approach from both data and model perspectives, aiming to expand the capability limits of image restoration models.
This exploration is vital for understanding the strengths and limitations of large-model-based approaches within the domain of image restoration.** We are optimistic that advancements in model distillation and quantization will significantly enhance our approach by reducing model size while maintaining effectiveness. **Additionally, our generated dataset offers significant value for real-world applications.** As discussed in the **reply of Q1**, with the same training time for DreamClear (5 days using 32 A100 GPUs), the model trained on our generated dataset achieves better quantitative and qualitative results compared to those trained on existing IR datasets. Besides, as shown in Figure 6, when training lightweight, deployable models like SwinIR, using our generated dataset also results in significant performance improvements compared with using existing IR datasets. The results are as follows (evaluated on LSDIR-Val): |Trainset|LPIPS $\downarrow$|DISTS $\downarrow$|FID $\downarrow$|MANIQA $\uparrow$|MUSIQ $\uparrow$|CLIPIQA $\uparrow$| |:-------:|:------:|:------:|:------:|:------:|:------:|:------:| |DIV2K+Flickr2K|0.4578|0.2435|51.29|0.3691|63.12|0.5647| |Ours Generated (20K images)|**0.3873**|**0.1951**|**42.13**|**0.4502**|**68.83**|**0.6469**| This further underscores the value of the dataset we constructed in this paper for real-world image restoration applications. --- We hope that our response addresses your concerns sincerely. Looking forward to further communication with you! **References** [1] High-Resolution Image Synthesis with Latent Diffusion Models. In CVPR, 2022. [2] PixArt-$\alpha $: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis. In ICLR, 2024. --- Rebuttal 2: Comment: Thanks for your insightful discussion! We'll add all these discussions in our final paper. 
The primary motivation for introducing the reference image is to provide guidance using degradation-invariant image features, thereby helping the model achieve more realistic restorations. Therefore, providing the reference image is essential for the proposed DreamClear. As mentioned in previous discussions, both quantitative and qualitative ablation studies verify its effectiveness. For an alternative approach, one possible idea is to develop a degradation-invariant feature extraction network (e.g., fine-tune the CLIP image encoder with a supervised loss from pairs of LQ/HQ images) that directly extracts clean features to achieve a similar effect. In future work, we intend to explore these potential alternatives to further enhance model performance. Thank you for taking the time to provide your feedback. We hope that we have addressed all of your concerns. If so, we kindly ask if you could reconsider our score. Should you have any further suggestions or require additional improvements, please do not hesitate to let us know. --- Rebuttal Comment 2.1: Comment: Thank you for your efforts. Though I am still worried about the training complexity of the proposed model, considering the efforts of the authors I will upgrade my rating to borderline accept. --- Reply to Comment 2.1.1: Comment: Thank you for your discussion! We appreciate your constructive feedback that has helped refine our research. We'll carefully revise our final paper. Your positive rating means a lot to us.
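The GPU-day comparison discussed earlier in this thread relies on a simple unit conversion. Below is a minimal arithmetic sketch under the stated assumptions (an A6000 counted as a V100-class GPU, and an A100 assumed ~2.2× faster than a V100); the helper name `a100_gpu_days` is ours, not from the paper.

```python
# Reproducing the GPU-day conversion from the training-cost comparison.
# Assumption (from the reply): A6000 counted as V100-class, A100 ~2.2x a V100.
V100_TO_A100_SPEEDUP = 2.2

def a100_gpu_days(num_gpus: int, days: float, speedup: float = 1.0) -> float:
    """Convert a (num_gpus, days) budget into equivalent A100 GPU-days."""
    return num_gpus * days / speedup

# SUPIR: ~10 days on 64 A6000s, converted through the V100 equivalence
supir_days = a100_gpu_days(64, 10, V100_TO_A100_SPEEDUP)  # ~290.9 A100 days
# DreamClear: ~7 days on 32 A100s, already expressed in A100 days
dreamclear_days = a100_gpu_days(32, 7)                    # 224.0 A100 days
```

Since the A6000 is in fact faster than a V100, the converted SUPIR figure is a lower bound, which is why the reply reports it as "> 291 A100" days.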
Summary: The paper introduces "DreamClear," a high-capacity image restoration model, and "GenIR," an innovative data curation pipeline for image restoration (IR) tasks. DreamClear leverages Diffusion Transformer (DiT) models and generative priors from text-to-image (T2I) diffusion models, combined with multi-modal large language models (MLLMs), to achieve photorealistic restorations. GenIR is a dual-prompt learning pipeline that generates a large-scale dataset of high-quality images while ensuring privacy and copyright compliance. The combination of these strategies enhances the model's ability to handle diverse real-world degradations. Strengths: 1. GenIR provides a novel, privacy-safe method to create large-scale datasets, overcoming the limitations of existing datasets that are often small and not comprehensive. 2. DreamClear integrates generative priors from T2I diffusion models and employs a Mixture of Adaptive Modulator (MoAM) to handle diverse degradation scenarios effectively. 3. The model achieves high perceptual quality in restored images compared with other methods on various benchmarks. Weaknesses: 1. Complexity in Implementation: The dual-branch structure and the integration of multiple experts for degradation handling add complexity to the model. 2. Some technical details and more experiments should be provided. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Some technical details are not clear. For example, what kind of loss do you use in this paper? While the paper outlines the steps of the GenIR pipeline, it lacks specific details on parameter settings, such as the hyperparameters used in the generative models, data augmentation methods during training, and strategies for generating negative samples. There is insufficient detail on the criteria and standards used to evaluate and select the generated dataset. 2.
One potential issue with the DreamClear approach is the risk of generating artificial textures during the image restoration process. Taking Figure 4 as an example, although the result is clearer, it may produce artifacts because the beak of this bird does not look like this. 3. In Figure 6, as data size increases, performance improves. Is there further improvement for more datasets, e.g., increasing the number of training images to 200000? 4. It would be better to compare the efficiency of different methods, e.g., model size, training/inference time, FLOPs. 5. Some necessary related work should be discussed. There are many diffusion-based image restoration methods, e.g., DDRM, DDNM, DeqIR, etc. It would be better to cite and discuss them. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please refer to the details above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for the valuable and positive comments on our work. We address the questions and clarify the issues accordingly as described below. **Q1: About Model Implementation** **[Reply]** Thanks. We explain the motivation behind our model design as follows: * Dual Branch: Our dual-branch structure incorporates the reference image, allowing the model to focus less on degradation removal and more on enhancing image details through generative priors, ultimately producing more photorealistic images. Considering potential detail loss in the reference image, we use both the LQ image and the reference image to jointly guide the diffusion model in order to obtain a realistic image that remains faithful to the original. * Multiple Experts: The proposed Mixture-of-Adaptive-Modulator (MoAM) aims to enhance the model's robustness to intricate real-world degradations by explicitly guiding it through the introduction of token-wise degradation priors. Specifically, we feed the token-wise degradation map into a router network to dynamically select different restoration experts for effective feature fusion. MoAM dynamically fuses expert knowledge, leveraging degradation priors to tackle complex degradations. Table 3 and Figure 7 provide quantitative and qualitative ablations, respectively, to demonstrate the effectiveness of our model design. Besides, we also provide the efficiency comparison with SOTAs (please refer to the **global response**). The efficiency of DreamClear is comparable to SeeSR and SUPIR. Therefore, we believe that our design is effective and not redundant. **Q2: About technical details** **[Reply]** Thanks for pointing this out. We provide **more training details** in the **global response**. To **generate negative samples**, we adopt SDEdit [1] as the image editing technique.
Specifically, we use text prompts such as _"cartoon, painting, blur, messy, low quality, deformation, low resolution, over-smooth, dirty"_ to edit the positive samples. We set the strength in SDEdit to $0.6$, resulting in the corresponding negative samples. During the fine-tuning phase, the number of negative samples is controlled to be one-tenth of the number of positive samples. To **evaluate & select appropriate samples** for our dataset, we first employ a quality classifier to screen the quality of the generated samples. Then we utilize Gemini as a powerful MLLM to ascertain whether the images exhibit blatant semantic errors or inappropriate content. The implementation details are as follows: * We use a convolutional neural network (CNN) as a binary classifier, trained with an equal number of positive and negative samples. Specifically, the negative samples are obtained through SDEdit with the strength set to $0.25$. Samples with a classification probability of less than $0.8$ are filtered out. * The prompt for Gemini is set to _"You are an AI visual assistant, and you are analyzing a single image. Your task is to check the image for any anomalies, irregularities, or content that does not align with common sense or normal expectations. Additionally, identify any inappropriate content such as personal privacy violations or NSFW material. If the image does not contain any of the aforementioned issues, it has passed the inspection. Please determine whether this image has passed the inspection (answer yes/no) and provide your reasoning."_ Samples that do not pass the inspection will be eliminated. * Only the samples that have passed these two rounds of screening will be retained. We will add all these technical details in the final version. **Q3: About artificial textures** **[Reply]** Thank you for your valuable feedback. Real-world image restoration is an ill-posed problem, meaning there isn't a single definitive solution. 
Our diffusion-based approach aims to generate visually pleasing restored images in most cases. However, when the degradation of the input image is very severe, it becomes challenging to ensure that certain details in the restored image are faithful to the input. Regarding the bird (gyrfalcon) in Figure 4, the benchmark does not include a corresponding GT image. Therefore, we sourced photos of gyrfalcons from the internet to compare the results of different methods. These comparison results are provided in the **attached PDF file** (Figure I). Despite some differences between our restored image and the real gyrfalcon, our approach significantly surpasses SeeSR and SUPIR in terms of clarity and semantic preservation. Besides, we also provide more real-world comparisons with SUPIR in the **attached PDF file** (Figure II). It demonstrates that our method can achieve clearer details and fewer artifacts when dealing with real-world cases. We'll add more real-world comparisons in our final paper. **Q4: About the increasing data size** **[Reply]** Thanks for the advice. After increasing the data size, the results are as follows: |Image Number|LPIPS $\downarrow$|DISTS $\downarrow$|FID $\downarrow$|MANIQA $\uparrow$|MUSIQ $\uparrow$|CLIPIQA $\uparrow$| |:-------:|:------:|:------:|:------:|:------:|:------:|:------:| |100000|0.3902|0.1982|43.27|0.4413|68.27|0.6382| |200000|0.3873|0.1951|42.13|0.4502|68.83|0.6469| After increasing the data size, the performance of the model is improved. We'll add more results using different data sizes in our final paper. **Q5: About the efficiency comparison** **[Reply]** Thanks for this valuable suggestion. Please refer to the **global response**. **Q6: About the related work** **[Reply]** Thanks for your reminder. DDRM, DDNM, and DeqIR are all excellent training-free methods that handle various image restoration tasks by improving the sampling process of pre-trained diffusion models.
We'll cite all these related diffusion-based works and discuss them in our final paper. **References** [1] Sdedit: Guided image synthesis and editing with stochastic differential equations. In ICLR, 2022. --- Rebuttal 2: Comment: Dear Reviewer CpjF, Thank you very much for recognizing our work: "GenIR provides a novel method" and "DreamClear achieves high perceptual quality". We greatly appreciate the time and effort you dedicated to reviewing our paper. As the deadline for the discussion period approaches and we have yet to receive your feedback, we kindly request that you share any remaining concerns. Please let us know if we can provide any additional information or clarification. Thank you once again for your contributions to the development of our paper. Authors of Submission 8300
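The two-round sample screening described in the reply to Q2 above (a CNN quality classifier with a 0.8 probability threshold, followed by an MLLM yes/no inspection) can be sketched as follows. The names `quality_prob` and `mllm_passed` are hypothetical placeholders for the classifier output and Gemini's verdict, which the rebuttal does not expose as code.

```python
QUALITY_THRESHOLD = 0.8  # samples below this classifier probability are dropped

def keep_sample(quality_prob: float, mllm_passed: bool) -> bool:
    """Retain a generated image only if it passes both screening rounds."""
    if quality_prob < QUALITY_THRESHOLD:  # round 1: CNN quality classifier
        return False
    return mllm_passed                    # round 2: MLLM semantic/NSFW check

# Toy candidates: (quality classifier probability, MLLM inspection verdict)
candidates = [(0.95, True), (0.70, True), (0.92, False), (0.81, True)]
kept = [c for c in candidates if keep_sample(*c)]  # only samples passing both
```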
Rebuttal 1: Rebuttal: We sincerely appreciate all reviewers for their valuable and positive comments on our work. This is a **global response** to reviewers' questions. **1. Efficiency Comparison** We provide the efficiency comparison results as follows: | | | | | | :-------: | :------: | :------: | :------: | | **Methods** | **Params** | **Training Time** | **Inference Time** | | Real-ESRGAN | 16.7M | N/A | 0.08s | | ResShift | 173.9M | N/A | 1.11s| | SinSR | 173.9M | N/A | 0.27s| | StableSR | 1409.1M | $\approx$ 7 days using 8 V100 | 12.36s| | DiffBIR | 1716.7M | N/A | 3.45s| | SeeSR | 2283.7M | N/A | 4.65s| | SUPIR | 3865.6M | $\approx$ 10 days using 64 A6000 | 16.36s| | DreamClear | 2283.7M | $\approx$ 7 days using 32 A100 | 15.84s| The inference speed is tested on a single NVIDIA A100 GPU to generate $512 \times 512$ images from $128 \times 128$ inputs. StableSR, DiffBIR, SeeSR, SUPIR, and DreamClear are all built upon the pre-trained T2I model, resulting in a larger number of parameters. When compared to two recent SOTAs, SeeSR and SUPIR, DreamClear has a similar number of parameters to SeeSR but approximately 1600M fewer parameters than SUPIR. Due to their foundation on MLLMs, both DreamClear and SUPIR exhibit slower inference speed than other methods, with DreamClear being about 0.5 seconds faster than SUPIR. Additionally, SUPIR and DreamClear require more training time and computational resources than other methods due to their use of larger training datasets. Nonetheless, they achieve superior visual results compared to other methods. We'll add the efficiency comparison results in the final version. **2.
More Training Details** For training both GenIR and DreamClear, we use the latent diffusion loss [1], which can be formulated as $$\mathcal{L} _ {\text{LDM}}=\mathbb{E} _ {z_0,c,t,\epsilon}\left[\left\| \epsilon-\epsilon_\theta(\sqrt{\bar{\alpha}_t}z_0+\sqrt{1-\bar{\alpha}_t}\epsilon ,c,t)\right\|^2_2\right],$$ where $\epsilon \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ is the ground-truth noise map at time step $t$, $c$ represents the conditional information, and $\bar{\alpha}_t$ is the diffusion coefficient. We provide detailed training hyperparameters as follows: |||| |:-------|:------:|:------:| |**Configuration**|**GenIR**|**DreamClear**| |Optimizer|Adafactor|AdamW| |Optimizer hyper-parameters|$eps_1=10^{-30},eps_2=0.001,decay=-0.8$|$\beta_1=0.9,\beta_2=0.999,eps=10^{-10}$| |Peak learning rate|$4\times10^{-7}$|$5\times10^{-5}$| |Learning rate schedule|constant|constant| |Warmup steps|2000|0| |Gradient clip|1.0|1.0| |Global Batch size|256|256| |Numerical precision|$\text{bfloat16}$|$\text{fp16}$| |Computational Resources|16 A100 (80GB)|32 A100 (80GB)| |Training Time|$\approx$ 5 days|$\approx$ 7 days| |Data Augmentation|random crop, flip|random crop| **3. More Figures** We provide some figures for rebuttal in the **attached PDF file**. **References** [1] High-Resolution Image Synthesis with Latent Diffusion Models. In CVPR, 2022. Pdf: /pdf/a7fe22573db2e02af0e3f4f9c2dbe0160eb1f62b.pdf
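The latent diffusion objective above can be illustrated with a toy, dependency-free sketch (scalar latents, a linear beta schedule, and a dummy noise predictor; all of these choices are ours for illustration, not the authors' training code, and the conditioning $c$ is omitted for brevity):

```python
import math
import random

def ldm_loss_sample(z0, eps_model, alpha_bar, t, rng):
    """One Monte Carlo term of the LDM loss: ||eps - eps_theta(z_t, t)||^2."""
    eps = rng.gauss(0.0, 1.0)  # ground-truth noise eps ~ N(0, 1)
    zt = math.sqrt(alpha_bar[t]) * z0 + math.sqrt(1.0 - alpha_bar[t]) * eps
    return (eps_model(zt, t) - eps) ** 2

# Toy setup: linear beta schedule and cumulative products alpha_bar_t
T = 10
betas = [0.001 + 0.02 * i / (T - 1) for i in range(T)]
alpha_bar, prod = [], 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bar.append(prod)

rng = random.Random(0)
# A trivial "model" that always predicts zero noise (stand-in for eps_theta)
losses = [ldm_loss_sample(0.5, lambda z, t: 0.0, alpha_bar, t, rng)
          for t in range(T)]
avg_loss = sum(losses) / len(losses)
```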
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Transfer Learning for Diffusion Models
Accept (poster)
Summary: This paper introduces a new framework, the Transfer Guided Diffusion Process (TGDP), for transferring a pre-trained diffusion model from the source domain to the target domain. They connect the score function for the target domain and the score function of the source domain with a guidance term related to the density ratio. They use a classifier to estimate the density ratio and extend TGDP to a conditional version. Besides, a Cycle Regularization term and a Consistency Regularization term are proposed to improve the performance. Strengths: 1. This paper introduces a novel training paradigm for transferring a pre-trained diffusion model from the source domain to the target domain in a more efficient and effective way. The guidance term proposed in this paper is a technique worthy of reference and further in-depth study by researchers. 2. The paper is well-written and easy to understand. The theoretical analysis of the robustness is interesting and well described, which helps to understand the proposed techniques. 3. Experimental results show the effectiveness of the proposed method. Weaknesses: 1. The part on the Extension to the Conditional Version could be more detailed. The authors could discuss what Lemma 3.2 is in the conditional version. 2. This paper is missing a comparison with articles related to transfer learning for diffusion models. In the experiment part, the authors only compare TGDP with the vanilla diffusion model and the fine-tuned generator. 3. Except for Gaussian mixture simulations and benchmark electrocardiogram (ECG) data, the authors could provide more experiment results on other datasets. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Is there a writing mistake in Line 162? It is hard to sample from p(x0|xt) rather than q(x0|xt)? 2. What is the Lemma 3.2 in the conditional version? 3. Did you find other articles related to transfer learning for diffusion models, and can you compare other methods with TGDP?
Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: A limitation of this study is the lack of empirical validation regarding TGDP’s performance on language vision tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and suggestions. We appreciate the time you spent on the paper. Below, we address your concerns and comments. **Q**: *The authors could discuss what is the Lemma 3.2 in the conditional version. What is the Lemma 3.2 in the conditional version?* **A**: Thank you very much for this kind reminder. Yes, the key idea behind Lemma 3.2 is that the conditional expectation is the optimal solution to the least-squares regression problem. It can be easily extended to conditional generation, i.e., $h _{\psi^*}\left(\mathbf{x}_t, y, t\right) = \mathbb{E} _{p\left(\mathbf{x}_0 \mid \mathbf{x}_t, y\right)}\left[q\left(\mathbf{x}_0, y\right)/p\left(\mathbf{x}_0, y\right)\right]$. We give the Lemma and proof here for the sake of completeness, and will add them to the revised version. Lemma: For a neural network $h _{\boldsymbol{\psi}}\left(\mathbf{x}_t, y, t\right)$ parameterized by $\boldsymbol{\psi}$, define the objective $\mathcal{L} _{\text{guidance}}(\boldsymbol{\psi}) :=\mathbb{E} _{p(\mathbf{x}_0, \mathbf{x}_t, y)}\left[\left\|h _{\boldsymbol{\psi}}\left(\mathbf{x}_t, y, t\right)-\frac{q(\mathbf{x}_0, y)}{p(\mathbf{x}_0, y)}\right\|_2^2\right],$ then its minimizer $\boldsymbol{\psi}^* = \underset{\boldsymbol{\psi}}{\arg \min } \ \mathcal{L} _{\text{guidance}}(\boldsymbol{\psi})$ satisfies: $h _{\boldsymbol{\psi}^*}\left(\mathbf{x}_t, y, t\right)=\mathbb{E} _{p(\mathbf{x}_0 |\mathbf{x}_t, y)}\left[{q(\mathbf{x}_0, y)}/{p(\mathbf{x}_0, y)}\right].$ The proof is straightforward and very similar to the unconditional version.
Note that the objective function can be rewritten as $$ \begin{aligned} \mathcal{L} _{\text{guidance}}(\boldsymbol{\psi}) :&= \mathbb{E} _{p(\mathbf{x}_0, \mathbf{x}_t, y)}\left[\left\|h _{\boldsymbol{\psi}}\left(\mathbf{x}_t, y, t\right)-\frac{q(\mathbf{x}_0, y)}{p(\mathbf{x}_0, y)}\right\|_2^2\right] \\\\ & = \int _{\mathbf{x}_t} \int _{y} \{\int _{\mathbf{x}_0}p(\mathbf{x}_0|\mathbf{x}_t,y) \left\|h _{\boldsymbol{\psi}}\left(\mathbf{x}_t, y, t\right) - \frac{q(\mathbf{x}_0, y)}{p(\mathbf{x}_0, y)}\right\|_2^2 d\mathbf{x}_0 \} p(\mathbf{x}_t|y) p(y) dy d\mathbf{x}_t \\\\ & = \int _{\mathbf{x}_t} \int _{y} \{ \left\|h _{\boldsymbol{\psi}}(\mathbf{x}_t, y, t)\right\|_2^2 - 2 \langle h _{\boldsymbol{\psi}}(\mathbf{x}_t, y, t), \int _{\mathbf{x}_0}p(\mathbf{x}_0|\mathbf{x}_t, y) \frac{q(\mathbf{x}_0,y)}{p(\mathbf{x}_0,y)} d\mathbf{x}_0 \rangle \} p(\mathbf{x}_t|y) p(y) dyd\mathbf{x}_t + C \\\\ & = \int _{\mathbf{x} _t} \int _{y} \left\|h _{\boldsymbol{\psi}}(\mathbf{x}_t, y, t) - \mathbb{E} _{p(\mathbf{x}_0 |\mathbf{x}_t, y)}\left[\frac{q(\mathbf{x}_0, y)}{p(\mathbf{x}_0, y)}\right] \right\|_2^2 p(\mathbf{x}_t|y) p(y) dy d\mathbf{x}_t, \end{aligned}$$ where $C$ is a constant independent of $\boldsymbol{\psi}$. Thus we have the minimizer $\boldsymbol{\psi}^* = \underset{\boldsymbol{\psi}}{\arg \min } \ \mathcal{L} _{\text{guidance}}(\boldsymbol{\psi})$ satisfies $h _{\boldsymbol{\psi}^*}\left(\mathbf{x}_t, y, t\right)=\mathbb{E} _{p(\mathbf{x}_0|\mathbf{x}_t, y)}\left[{q(\mathbf{x}_0, y)}/{p(\mathbf{x}_0, y)}\right]$. **Q**: *This paper is missing a comparison with articles related to transfer learning for diffusion models. Do you find out other articles related to transfer learning for diffusion models and can you compare other methods with TGDP?* **A**: Thank you very much for this question. As far as we know, [1,2,3] explores approaches to fine-tuning diffusion models. 
They focus on methods that either significantly reduce the number of tunable parameters or introduce regularization terms to alleviate overfitting for image data. Since fine-tuning only a subset of weights in diffusion models often yields results that are worse than or comparable to methods that fine-tune all weights, we compared our method to full-weight fine-tuning, which we believe serves as a strong baseline. [1] Moon, T., Choi, M., Lee, G., Ha, J., Lee, J., Kaist, A. Fine-tuning Diffusion Models with Limited Data. NeurIPS 2022 Workshop on Score-Based Methods. [2] Xie, E., Yao, L., Shi, H., Liu, Z., Zhou, D., Liu, Z., Li, J., Li, Z. DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 4207-4216. [3] Zhu, J., Ma, H., Chen, J., Yuan, J. (2023). DomainStudio: Fine-Tuning Diffusion Models for Domain-Driven Image Generation using Limited Data. ArXiv, abs/2306.14153. **Q**: *Except Gaussian mixture simulations and benchmark electrocardiogram (ECG) data, the authors could provide more experiment results on other datasets.* **A**: Thank you very much for this question. We refer to the "general" response and Table 1 in the attached file. **Q**: *Typo in line 162.* **A**: Thank you very much for your kind reminder. We have corrected it in the revised version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I will keep my original rating.
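The lemma discussed in this thread rests on a standard fact: the L2-optimal predictor of the density ratio, for a fixed conditioning point, is its conditional expectation. A quick Monte Carlo sanity check of that fact (with a hypothetical toy distribution of ratio values standing in for $q/p$; the distribution choice is ours, purely for illustration):

```python
import random

rng = random.Random(42)
# Toy samples of the density ratio r(x0) for a fixed (x_t, y); the choice of
# distribution is arbitrary and only for illustration.
ratios = [abs(rng.gauss(1.0, 0.3)) for _ in range(10_000)]
mean_r = sum(ratios) / len(ratios)  # empirical analogue of E[r(x0) | x_t, y]

def mse(h):
    """Empirical analogue of E[(h - r(x0))^2] for a constant predictor h."""
    return sum((h - r) ** 2 for r in ratios) / len(ratios)
```

Since `mse(h) = Var(r) + (h - mean_r)^2` up to sampling error, any `h` other than the empirical mean gives a strictly larger loss, mirroring the completed-square step in the proof above.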
Summary: This paper addresses the transfer learning problem for diffusion models, specifically adapting pre-trained diffusion models to downstream datasets, particularly when the data size is small. Traditional parameter-efficient fine-tuning methods often use pre-trained models as parameter initialization, selectively updating parameters based on prior knowledge. These methods, however, can be suboptimal and not robust across different downstream datasets. The authors propose a novel approach called Transfer Guided Diffusion Process (TGDP), which aims to transfer pre-trained diffusion models more effectively. Instead of fine-tuning, TGDP treats the knowledge transfer process as guided by the pre-trained diffusion model. This method involves learning a domain classifier and using its gradient to guide the estimated score function from the source to the target domain, complemented by additional regularizations in practical implementation. Experimental results demonstrate that TGDP outperforms traditional fine-tuning methods, achieving state-of-the-art performance on both synthetic and real-world datasets. Strengths: 1. Provides a novel perspective on knowledge transfer for pre-trained diffusion models. 2. Theoretical foundation of the proposed method is well-constructed. 3. The paper is well-organized with clear presentation. Weaknesses: 1. **Scope of Title:** - The title "Transfer Learning" is too broad. The paper focuses solely on (few-shot and supervised) domain adaptation, where the upstream and downstream tasks are similar in nature. General transfer learning encompasses a wider range of tasks, including those with different downstream tasks from the upstream one. For instance, transferring pre-trained text-to-image models to controllable generation or text-to-video generation are broader and more complex tasks that fall under transfer learning. - The paper does not address transfer learning across different label spaces. 
Even within the same generation tasks, domain adaptation requires identical label spaces for source and target domains, which is not always practical. For example, transferring pre-trained conditional generation models on ImageNet to other fine-grained datasets with different label spaces presents a more challenging problem than the domain adaptation addressed in this paper. 2. **Empirical Results:** - The benchmarks used are insufficient. The authors conduct experiments primarily on synthetic datasets and a single real-world dataset (ECG). This limited scope is inadequate for demonstrating the method's effectiveness. More experiments on various modalities and datasets, such as the DomainNet dataset, which is a benchmark for domain adaptation, are necessary to showcase the generalization ability of the proposed method. - The provided analyses are insufficient. While several ablation studies on simulations are included in Appendix C.2, they are not comprehensive. Essential missing analyses include (1) examining the main term $\mathcal{L}\_{\text{guidance}}$ alone and (2) combining $\mathcal{L}\_{\text{guidance}}$ with $\mathcal{L}\_{\text{cycle}}$. These studies are crucial for understanding each term's effectiveness. Additionally, ablation studies should also be conducted on real-world datasets, not just synthetic ones. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. **Consistency Regularization:** - The consistency regularization requires optimization of the gradient term $\nabla_{x_t}\log h_\psi(x_t,t)$, but the paper lacks details on how to optimize this term, especially when second-order gradients are involved. Could the authors elaborate on the optimization process for this term? 2. **Effectiveness of TGDP:** - In Table 1, although TGDP significantly outperforms baselines, its performance is highly dependent on the target data size $n$.
This seems contradictory to two points mentioned by the authors: (a) the training of the domain classifier $c$ is not significantly affected by $n$ (Figure 2), and (b) the training of the guidance network $h$ with the main term $\mathcal{L}_{\text{guidance}}$ does not require target data (Equation 8). How do the authors explain this discrepancy? 3. **Explanation of Figure 3:** - Figure 3 shows that the visualizations of *Finetune Generator* and *TGDP* appear similar, yet their performance differs significantly. Could the authors provide more insights and explanations about the figure and the reasons behind this performance difference? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and suggestions. We appreciate the time you spent on the paper. Below we address the concerns and comments that you have provided. **Q**: *The title "Transfer Learning" is too broad. The paper focuses solely on (few-shot and supervised) domain adaptation, where the upstream and downstream tasks are similar in nature. General transfer learning encompasses a wider range of tasks, including those with different downstream tasks from the upstream one. For instance, transferring pre-trained text-to-image models to controllable generation or text-to-video generation are broader and more complex tasks that fall under transfer learning.* **A**: Thank you very much for this question. Actually, our proposed framework is general enough to deal with more complex tasks, e.g., fine-tuning text-to-image models or text-to-video generation. These models can be viewed as conditional generative models with text embeddings as the condition. **Q**: *The paper does not address transfer learning across different label spaces. Even within the same generation tasks, domain adaptation requires identical label spaces for source and target domains, which is not always practical. For example, transferring pre-trained conditional generation models on ImageNet to other fine-grained datasets with different label spaces presents a more challenging problem than the domain adaptation addressed in this paper.* **A**: Thank you very much for this question. Our framework (conditional guidance) does not assume identical label spaces for source and target domains. In the ECG experiments, the label set of the target domain is a subset of that of the source domain.
When the source and target domains contain different class labels, our framework is still applicable, i.e., when $y_t\neq y_s$, $$\underbrace{\mathbf{s} _{\boldsymbol{\phi}^*}(\mathbf{x} _t, y _{t}, t)} _{\text{target score}} = \underbrace{\nabla _{\mathbf{x} _t} \log p_t(\mathbf{x}_t | y_s)} _{\text {pre-trained conditional model on source}}+ \underbrace{\nabla _{\mathbf{x}_t} \log \mathbb{E} _{p(\mathbf{x}_0|\mathbf{x}_t, y_s)}\left[\frac{q(\mathbf{x}_0, y_t)}{p(\mathbf{x}_0, y_s)}\right]} _{\text {conditional guidance}}.$$ To generate an unseen class $y_t$, the key problem is to choose a particular class $y_s$ from the source domain such that we can borrow as much information as possible from the source domain when generating this unseen class in the target domain. The coupling between $y_t$ and $y_s$ can be learned by solving a static optimal transport problem. This shows that our framework is general enough to handle both homogeneous and heterogeneous transfer learning. A more in-depth design (e.g., a coupling obtained via static optimal transport) is left to future work. **Q**: *More experiments on various modalities and datasets are necessary to showcase the generalization ability of the proposed method.* **A**: Thank you very much for this suggestion. We refer to the "general" response and Table 1 in the attached file. **Q**: *While several ablation studies on simulations are included in Appendix C.2, they are not comprehensive.* **A**: Thank you very much for this suggestion. We refer to the “global” response and Figure 1 in the attached file. **Q**: *The consistency regularization requires optimization of the gradient term $\nabla _{\mathbf{x}_t} \log h _{\boldsymbol{\psi}}\left(\mathbf{x}_t, t\right)$. Could the authors elaborate on the optimization process for this term?* **A**: Thank you very much for this question. Optimizing through such a gradient term is common practice in meta-learning methods. 
In practice, implementing it requires only a one-line change, i.e., "torch.autograd.grad($\log h_{\boldsymbol{\psi}}\left(\mathbf{x}_t, t\right)$, $\mathbf{x}_t$, retain\_graph=True)" to retain the computational graph for backpropagation through the weights $\boldsymbol{\psi}$ of the guidance network, which can be found at line 249 in density\_ratio\_guidance.py. We will add additional clarifications to the revised version. **Q**: *Effectiveness of TGDP.* **A**: Thank you very much for this question. Both claims are correct. (The training of the domain classifier is not significantly affected by the number of samples in the target domain, since the domain classifier can achieve more than 90\% accuracy learning from only 10 samples in the target domain. And the training of the guidance network with $\mathcal{L}_{\text{guidance}}$ does not require target data.) The key problems are: 1) the density ratio estimator may not be accurate in some regions, since coverage is insufficient when the number of training samples is only 10; 2) $\mathcal{L} _{\text{cycle}}$ and $\mathcal{L} _{\text{consistence}}$ depend on the number of samples in the target domain. **Q**: *Explanation of Figure 3.* **A**: Thank you very much for your question. In Table 2, we demonstrate improvements in the diversity and fidelity of TGDP against two baseline methods. For each sub-figure in Fig 3, we obtain the embedding for T-SNE using the same encoder, learned directly on target samples. If we change the encoder to the classifier learned by each method, we may see a big difference between Finetune Generator and TGDP. --- Rebuttal 2: Comment: Thank you for your response. However, some of my concerns still remain unresolved. - **Q1: The title "Transfer Learning" is too broad.** One typical advantage of fine-tuning and other transfer learning methods (compared with score shaping) is to adapt to a new downstream task. 
For example, Stable Diffusion is a *text-to-image generation* model, and [1-2] transfer it to *controllable generation* and *text-to-video generation*, respectively. Could the authors provide further details about how TGDP handles these tasks? - **Q7: Explanation of Figure 3.** Why does using the same encoder learned on target samples lead to similar T-SNE visualizations, although the diversity and fidelity are improved? Are there any results following "change the encoder to the learned classifier"? [1] Adding Conditional Control to Text-to-Image Diffusion Models, ICCV 2023 \ [2] Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models, CVPR 2023 --- Rebuttal Comment 2.1: Comment: We thank the reviewer for the comments and suggestions. We appreciate the time you spent on the paper. Below we address the concerns and comments that you have provided. Q1: The title "Transfer Learning" is too broad. One typical advantage of fine-tuning and other transfer learning methods (compared with score shaping) is to adapt to a new downstream task. For example, Stable Diffusion is a text-to-image generation model, and [1-2] transfer it to controllable generation and text-to-video generation, respectively. Could the authors provide further details about how TGDP handles these tasks? Thank you very much for your question. Given a source distribution $(x, \boldsymbol{c}_t) \sim p$ ($\boldsymbol{c}_t$ denotes text prompts, using the terminology from [1]), a pre-trained diffusion model can be trained on the source distribution. 
Given a target distribution $(x, \boldsymbol{c} _t, \boldsymbol{c} _{\mathrm{f}}) \sim q $ ($\boldsymbol{c} _{\mathrm{f}} $ denotes a task-specific condition), [1,2] can be fine-tuned by the noise matching objective, $$ \mathcal{L}= \mathbb{E} _{\boldsymbol{x}_0, \boldsymbol{t}, \boldsymbol{c}_t, \boldsymbol{c} _{\mathrm{f}}, \epsilon \sim \mathcal{N}(0,1)}\left[\| \epsilon-\epsilon _\theta\left(\boldsymbol{x}_t, \boldsymbol{t}, \boldsymbol{c}_t, \boldsymbol{c} _{\mathrm{f}}\right) \|_2^2\right]. $$ In our work, we propose to directly estimate $\nabla _{\mathbf{x}_t} \log \mathbb{E} _{p(\mathbf{x}_0|\mathbf{x}_t, \boldsymbol{c}_t)}\left[\frac{q(\mathbf{x}_0, \boldsymbol{c}_t, \boldsymbol{c} _{\mathrm{f}})}{p(\mathbf{x}_0, \boldsymbol{c}_t)}\right]$ rather than fine-tuning by the noise matching objective. We can still use a domain classifier $c_w(x, y)$ to estimate the density ratio, where $y$ denotes the embedding of the condition. Moreover, we would like to discuss some similarities and differences between our work and [1,2]. Directly fine-tuning the diffusion model on data from the target domain is similar to the consistency regularization proposed in our work, while [1,2] have a more in-depth architectural design and achieve great results on vision-language tasks. However, with limited data from the target distribution, direct fine-tuning may not achieve good enough performance, which we verify in a two-dimensional Gaussian setting. Q7: Explanation of Figure 3. Why does using the same encoder learned on target samples lead to similar T-SNE visualizations, although the diversity and fidelity are improved? Are there any results following "change the encoder to the learned classifier"? Thank you very much for your question. When computing FID/Diversity, we use the same encoder to get the feature map, which is common practice for a fair comparison. Therefore, we use the same feature map for T-SNE visualizations, which are generated by the same encoder. 
For the downstream task, we train a separate encoder for classification. --- Rebuttal 3: Title: reminder to authors to reply before the end of the author/reviewer discussion period Comment: Hello authors of the paper, It would be great if you could respond to reviewer UUwS's most recent comments before the author/reviewer discussion period closes (at Aug 13, 11:59pm AoE). Thanks, Your AC --- Rebuttal Comment 3.1: Comment: We thank the AC for the kind reminder. We appreciate the time you spent on the paper. --- Rebuttal 4: Comment: Thanks for your response; my concerns are addressed. I appreciate the contribution of TGDP in terms of motivation, novelty, and theoretical soundness, but still expect the scope to be clearer (since the title is *Transfer Learning*), and believe more discussion of TGDP in other fine-tuning scenarios could make the paper better. Overall, I raise my score to 5, i.e., a borderline score.
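The domain-classifier route to the density ratio that this thread keeps returning to can be illustrated with a minimal, self-contained sketch. Everything below is a hypothetical toy setup, not the paper's code: 1-D Gaussians stand in for the source $p$ and target $q$, and a from-scratch logistic classifier stands in for the neural domain classifier. It relies on the standard identity that, with balanced samples, the Bayes-optimal classifier $c(x)=q(x)/(p(x)+q(x))$ gives $q(x)/p(x)=c(x)/(1-c(x))$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: source p = N(0, 1), target q = N(2, 1), 1-D data.
x_p = rng.normal(0.0, 1.0, size=(2000, 1))
x_q = rng.normal(2.0, 1.0, size=(2000, 1))

# Domain classifier c(x) ~ P(domain = target | x): logistic regression
# trained from scratch by plain gradient descent on the logistic loss.
X = np.vstack([x_p, x_q])
y = np.concatenate([np.zeros(len(x_p)), np.ones(len(x_q))])
w, b = np.zeros(1), 0.0
for _ in range(2000):
    probs = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (probs - y)) / len(y)
    b -= 0.5 * np.mean(probs - y)

def density_ratio(x):
    """Recover q(x)/p(x) from the classifier via c/(1 - c)
    (valid for balanced source/target sample sizes)."""
    c = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return c / (1.0 - c)

# The estimated ratio is large where the target is denser and small
# where only the source has mass.
print(density_ratio(np.array([[2.0]]))[0] > density_ratio(np.array([[0.0]]))[0])  # True
```

In TGDP the classifier is a neural network and the ratio is further averaged under the denoiser's posterior $p(\mathbf{x}_0|\mathbf{x}_t)$ before taking gradients; this sketch only shows the classifier-to-ratio step.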
Summary: The paper proposes an approach for transfer learning based on score-based generative models and density-ratio estimation. The authors show that in order to transfer a trained diffusion model from a source to a target domain, all that is needed is the expectation of density ratios of source and target domains. This expectation can be learned using neural networks, reducing the problem of learning a generative model on the target domain to a regression/classification task. Strengths: - As far as I can judge, the proposed method for transfer learning is both original and significant. It should make a good contribution to the research community. - In addition to the core of the method, the proposed regularizers that improve efficiency are in my opinion conceptually convincing. - The paper is well written and easy to follow. - The experimental evaluation on the ECG data is in my opinion well executed and convincing. Weaknesses: - While the method is in my opinion theoretically convincing, the experimental section and ablations are a bit lackluster, which decreases confidence in the method (even though I acknowledge the complexity of the task itself). - The first experiment on a 2D Gaussian illustrates the performance of the method in comparison to baselines decently, but it is in my opinion ultimately too trivial to be really meaningful (since the density ratios of the 2 Gaussian mixtures can be easily learned, because they are well separated, I would expect that a method based on learning density ratios outperforms baselines here). The ablation in Figure 4 in the appendix is conducted too superficially. It consists of scatterplots, no real quantitative analysis, and doesn't evaluate all relevant loss terms of the model. - The evaluation on the ECG data is not fully clear, since meta-information, such as sampling rate, and other details are not given (as far as I can tell). 
- In my opinion, a quantitative evaluation on at least 2-3 more high-dimensional real-world data sets should have been conducted in addition. Technical Quality: 3 Clarity: 3 Questions for Authors: - Figure 1 shows the data as a scatterplot, which makes it impossible to see the learned density. Since the true densities are known, a representation using kernel density estimates would be better to see if the learned model aligns with the truth. - For readers who do not know what "12-lead ECG data" is, the supplement should contain this information (at least a rough overview). How does 12-lead ECG data look? Are they one-dimensional time series? At what rate are they sampled? - Task 4.2.2 is a bit unclear. The authors should state in more detail what the goal is and what exactly they are doing (even though one can infer it from the text). - It would have been nice to see the ablation for all combinations that contain $\mathcal{L}\_{\text{guidance}}$, and in particular $\mathcal{L}\_{\text{guidance}}$ alone (and not $\mathcal{L}\_{\text{consistence}}$). In this state, it is not clear what, e.g., $\mathcal{L}\_{\text{cycle}}$ or $\mathcal{L}\_{\text{consistence}}$ contribute separately. - How are the hyper-parameters $\eta$ chosen? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: - A statement on limitations regarding empirical evaluation exists. - Possible negative societal impact, e.g., regarding generation of deepfakes using transfer learning, is acknowledged. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and suggestions. We appreciate the time you spent on the paper. Below we address the concerns and comments that you have provided. **Q**: *The first experiment on a 2D Gaussian illustrates the performance of the method in comparison to baselines decently, but it is in my opinion ultimately too trivial to be really meaningful (since the density ratios of the 2 Gaussian mixtures can be easily learned, because they are well separated, I would expect that a method based on learning density ratios outperforms baselines here).* **A**: Thank you very much for this question. We agree that the two-dimensional Gaussian mixture setup is a relatively easy task and serves as a proof-of-concept numerical example to validate the proposed method. To further demonstrate the effectiveness of our method, we have conducted experiments on real-world ECG datasets, and on image datasets in the "general" response above and Table 1 in the attached file therein. Additionally, we would like to highlight a challenge associated with the two-dimensional Gaussian example: when two distributions differ significantly, the density ratio estimator struggles to accurately estimate the density ratio. Due to the well-separated nature of the distributions, the density ratio at some points can be as large as $10^5$. One key takeaway from our work is that our method is robust to the magnitude of the estimated density ratio. **Q**: *The ablation in Figure 4 in the appendix is conducted too superficially. It consists of scatterplots, no real quantitative analysis, and doesn't evaluate all relevant loss terms of the model. It would have been nice to see the ablation for all combinations that contain $\mathcal{L} _{\text{guidance}}$, and in particular $\mathcal{L} _{\text{guidance}}$ alone (and not $\mathcal{L} _{\text{consistence}}$). 
In this state, it is not clear what, e.g., $\mathcal{L} _{\text{cycle}}$ or $\mathcal{L} _{\text{consistence}}$ contribute separately.* **A**: Thank you very much for this suggestion. We refer to the “global” response and Figure 1 in the attached file. **Q**: *The evaluation on the ECG data is not fully clear, since meta-information, such as sampling rate, and other details are not given (as far as I can tell). For readers who do not know what "12-lead ECG data" is, the supplement should contain this information (at least a rough overview). How does 12-lead ECG data look? Are they one-dimensional time series? At what rate are they sampled?* **A**: Thank you very much for this suggestion. A 12-lead ECG (electrocardiogram) records 12 different perspectives of the heart's electrical activity, i.e., it is 12-dimensional time-series data. We use the data from the PTB-XL and ICBEB2018 datasets at a sampling frequency of 100 Hz, which means 100 samples per second. We include this necessary information in Section 4.2 of the revised version. **Q**: *How are the hyper-parameters $\eta$ chosen?* **A**: Thank you very much for this question. The guidance term is calculated based on data from the source distribution, while the two regularization terms are calculated based on data from the target distribution. To choose good values of $\eta_1$ and $\eta_2$, we must take the numbers of samples in the source distribution, $m$, and the target distribution, $n$, into consideration. Therefore, we initially set $\eta_1= \eta_2 = \sqrt{n/m}$ and rely on grid search to determine $\eta_1$ and $\eta_2$. **Q**: *A quantitative evaluation on at least 2-3 more high-dimensional real-world data sets should have been conducted in addition.* **A**: Thank you very much for this suggestion. We refer to the "general" response and Table 1 in the attached file. **Q**: *Task 4.2.2 is a bit unclear. 
The authors should state in more detail what the goal is and what exactly they are doing (even though one can infer it from the text).* **A**: Thank you very much for this suggestion. We include a paragraph in Section 4.2.2 to describe the ECG classification task in the revised version. --- Rebuttal 2: Comment: Thank you for the response to my review and comments. I believe the method would make a good contribution to the community, but a sparse and unrevised experimental section prohibits a higher score. Having read the reviews of the other reviewers, I agree that the method clearly needs more evaluations.
Summary: The paper introduces a novel approach called Transfer Guided Diffusion Process (TGDP) for transferring knowledge from a pre-trained diffusion model in the source domain to a target domain with limited data. The authors present the methodology for transferring knowledge, including the formulation of the guidance network and its learning process. They also extend TGDP to a conditional version for joint data and label distribution modeling. Strengths: 1. TGDP offers a new perspective on transferring knowledge from pre-trained diffusion models to new domains with limited data. The whole framework is innovative and reasonable. 2. The paper provides a solid theoretical basis for TGDP, including proofs for the optimality of the approach. Weaknesses: 1. More experiments on real-world datasets are required to further validate the effectiveness of the proposed framework. 2. The computational cost of training the guidance network and the diffusion model should be discussed. 3. The paper primarily validates TGDP on Gaussian mixture simulations and ECG datasets. Its performance on other types of data or domains is not empirically tested or discussed. Technical Quality: 3 Clarity: 3 Questions for Authors: In what scenarios does the proposed framework work best? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Authors have discussed limitations well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and suggestions. We appreciate the time you spent on the paper. Below we address the concerns and comments that you have provided. **Q**: *The paper primarily validates TGDP on Gaussian mixture simulations and ECG datasets. Its performance on other types of data or domains is not empirically tested or discussed. More experiments on real-world datasets are required to further validate the effectiveness of the proposed framework.* **A**: Thank you very much for your comments. We refer to the "general" response and Table 1 in the attached file. **Q**: *The computational cost of training the guidance network and the diffusion model should be discussed.* **A**: Thank you very much for this question. We provide the computational cost of training in Table 2 on page 9. The number of parameters in the guidance network (domain classifier) is much smaller than that of the diffusion model, which reduces the computational cost compared with fine-tuning based methods. **Q**: *In what scenarios does the proposed framework work best?* **A**: Thank you very much for this question. In this work, we prove that, using the guidance $\nabla _{\mathbf{x} _t} \log \mathbb{E} _{p(\mathbf{x}_0|\mathbf{x}_t)}\left[q(\mathbf{x}_0)/p(\mathbf{x}_0)\right]$ (or $\nabla _{\mathbf{x}_t} \log \mathbb{E} _{p(\mathbf{x}_0|\mathbf{x}_t, y)}\left[q(\mathbf{x}_0, y)/p(\mathbf{x}_0, y)\right]$ for the conditional generative model), we can generate samples from the target distribution with the pre-trained source generative model. Therefore, the performance of our method depends significantly on the accuracy of the estimated guidance network, particularly the estimated density ratio between the target and source distributions. When this density ratio is accurately estimated (especially when the relative magnitudes are accurately captured), our method achieves optimal performance. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. 
My concerns have been properly addressed and I decide to raise my score to 7.
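The guidance term discussed in this thread follows from a one-step marginalization; as a sketch (a standard derivation, reconstructed here under the assumption that both domains share the same forward noising kernel $p(\mathbf{x}_t \mid \mathbf{x}_0)$; notation follows the rebuttal):

```latex
% Marginal of the target data under the shared forward kernel:
q_t(\mathbf{x}_t)
  = \int p(\mathbf{x}_t \mid \mathbf{x}_0)\, q(\mathbf{x}_0)\, d\mathbf{x}_0
  = \int p(\mathbf{x}_t \mid \mathbf{x}_0)\, p(\mathbf{x}_0)\,
        \frac{q(\mathbf{x}_0)}{p(\mathbf{x}_0)}\, d\mathbf{x}_0
  = p_t(\mathbf{x}_t)\,
    \mathbb{E}_{p(\mathbf{x}_0 \mid \mathbf{x}_t)}\!\left[\frac{q(\mathbf{x}_0)}{p(\mathbf{x}_0)}\right].
% Taking gradients of the log splits the target score into the
% pre-trained source score plus the guidance term:
\nabla_{\mathbf{x}_t} \log q_t(\mathbf{x}_t)
  = \nabla_{\mathbf{x}_t} \log p_t(\mathbf{x}_t)
  + \nabla_{\mathbf{x}_t} \log
    \mathbb{E}_{p(\mathbf{x}_0 \mid \mathbf{x}_t)}\!\left[\frac{q(\mathbf{x}_0)}{p(\mathbf{x}_0)}\right].
```

The last term is exactly the guidance quantity quoted in the answer above, which is why estimating the density ratio (plus the denoiser's posterior expectation) suffices for transfer.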
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. We appreciate the time you spent on the paper. We summarize the positive feedback that we received as follows: Motivation and Novelty: The whole framework is innovative and reasonable (9nph); the proposed method for transfer learning is both original and significant. It should make a good contribution to the research community (53Zj); Provides a novel perspective on knowledge transfer for pre-trained diffusion models (UUwS); this paper introduces a novel training paradigm, and the guidance term proposed in this paper is a technique worthy of reference and further in-depth study by researchers (brAM). Theoretical soundness: The paper provides a solid theoretical basis for TGDP, including proofs for the optimality of the approach (9nph); the proposed regularizers that improve efficiency are in my opinion conceptually convincing (53Zj); theoretical foundation of the proposed method is well-constructed (UUwS); The theoretical analysis of the robustness is interesting and well described, which helps to understand the proposed techniques (brAM). Writing Quality: The paper is well written and easy to follow (53Zj); the paper is well-organized with clear presentation (UUwS); The paper is well-written and easy to understand (brAM). Effectiveness: The experimental evaluation on the ECG data is in my opinion well executed and convincing (53Zj); Experimental results show the effectiveness of the proposed method (brAM). Next, we aim to address the common weaknesses and concerns raised by the reviewers. First, all reviewers suggest providing additional experimental results on various data types and real-world datasets beyond the electrocardiogram (ECG) benchmark. Therefore, we conducted a preliminary experiment using an image dataset to verify the effectiveness of the proposed method. 
Given the time constraints and the computational cost associated with training diffusion models on large datasets (e.g., DomainNet) or fine-tuning/evaluating conditional diffusion models on language-vision tasks, we selected the MNIST to one-channel Street View House Numbers (SVHN) transfer task for a preliminary verification. Our method achieves a lower Fréchet inception distance (FID) than the two baseline methods, which can be found in Table 1 in the attached file. Although the digit dataset is relatively simple, we believe these experimental results can still demonstrate the effectiveness of the proposed method. Moreover, Reviewers 53Zj and UUwS point out that the ablation study on the effectiveness of the two regularization terms is not comprehensive enough. The key reason we do not provide the ablation for all combinations involving $\mathcal{L} _{\text{guidance}}$ is that we considered $\mathcal{L} _{\text{consistence}}$ as an intuitive solution for the transfer setting, while $\mathcal{L} _{\text{guidance}}$ and $\mathcal{L} _{\text{cycle}}$ represent the key contributions of our proposed methods. Therefore, in the ablation study, we verify that using $\mathcal{L} _{\text{consistence}}$ alone does not yield a sufficiently effective solution. To better illustrate the contributions of the three terms, we provide the ablation studies, $\mathcal{L} _{\text{guidance}}$ and $\mathcal{L} _{\text{guidance}} + \mathcal{L} _{\text{cycle}}$, in Table 1 in the attached file. Finally, we summarize our individual responses to each reviewer below. We appreciate the questions raised by the reviewers, which have helped improve the clarity of our paper. Reviewer 9nph: We clarified the computational advantages of training the guidance network compared to fine-tuning the diffusion model. Additionally, we discussed the scenarios in which our method performs well. 
Reviewer 53Zj: We enhanced the clarity of our paper by including detailed information about the 12-lead ECG data, the experimental setup, and the preliminary theoretical insights behind choosing the hyperparameters for the regularization terms. Reviewer UUwS: We clarified the applicability of the proposed framework, demonstrating its use for transferring text-to-image/video diffusion models and transferring knowledge across different label spaces. We also provided additional insights on Table 1 and Figure 3. Reviewer brAM: We improved the clarity of the conditional version of our framework by providing the necessary lemma along with the proof. We believe these revisions address the concerns raised by the reviewers and have improved the overall clarity and strength of our paper. Should you have any further questions, please do not hesitate to let us know. Sincerely, TGDP Authors. Pdf: /pdf/40123f36e5d5e87283fd8eab5b023835dbba82bb.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift
Accept (poster)
Summary: The paper addresses the significant challenge of data heterogeneity and distributed concept drift in federated learning. The authors propose a novel framework, which integrates classifier clustering and feature alignment to improve model performance and collaboration among clients facing different concept drifts. The key contributions of the paper are as follows: 1. FedCCFA Framework: The proposed framework includes a method for clustering local classifiers at the class level and generating clustered feature anchors to enhance feature alignment. This clustering helps clients with similar data distributions share classifiers, thus improving the generalization performance of the global model. 2. Feature Alignment: The framework introduces an adaptive feature alignment technique that aligns clients' feature spaces based on the entropy of the label distribution. This method helps to alleviate the inconsistencies in feature space due to data heterogeneity. 3. Experimental Validation: Extensive experiments demonstrate that FedCCFA significantly outperforms existing federated learning methods under various concept drift settings. The results show that FedCCFA effectively adapts to distributed concept drift and enhances generalization performance. Strengths: 1. Comprehensive Validation: The research is supported by well-designed experiments and thorough ablation studies, demonstrating the effectiveness of the FedCCFA framework. The mathematical formulations and algorithms are clearly presented, and the results are statistically significant. 2. Clear Presentation: The paper is clearly written and well-structured, making complex concepts accessible. Visual aids, such as graphs and tables, effectively support the explanations. Detailed breakdowns of the experimental setup enhance the reproducibility of the results. 3. 
Significant Impact: This work addresses a critical gap in federated learning, with potential applications in diverse fields like healthcare, finance, and mobile device collaboration. By improving the adaptability and generalization performance of federated models, the proposed FedCCFA framework provides a robust foundation for future studies to build upon, advancing the current state of federated learning research. Weaknesses: 1. Computation Overhead: The proposed FedCCFA framework involves additional computational steps to train balanced classifiers, which increases the overall computational cost. While the authors attempt to mitigate this by setting small iterations and batch sizes, exploring more efficient methods to achieve balanced classifier training would be beneficial. 2. Limited Evaluation Scenarios: The experiments are primarily conducted on standard datasets. Including more diverse datasets, especially those with real-world distributed concept drift scenarios, would strengthen the validation of the framework's general applicability and robustness. 3. Sensitivity to Hyperparameters: The effectiveness of FedCCFA relies on several hyperparameters, such as the maximum distance in DBSCAN and the scaling factor. While the authors provide some tuning, a more thorough analysis of the sensitivity and robustness to these hyperparameters across different datasets and settings would be valuable. 4. Handling Extreme Data Heterogeneity: The paper addresses data heterogeneity, but extreme cases of data heterogeneity can still pose challenges, as noted with gradient explosions in some scenarios. Further discussion and potential solutions to handle such extreme cases more effectively would improve the robustness of the framework. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Computational Overhead Have you explored alternative methods for balanced classifier training to reduce computational overhead? 
It’s better to consider leveraging advanced optimization techniques or lightweight pre-processing steps to achieve balanced classifier training more efficiently. 2. Evaluation on Diverse Datasets Do you plan to evaluate FedCCFA on more diverse, real-world datasets reflecting practical distributed concept drift scenarios? Extending the evaluation to real-world datasets would provide a more comprehensive understanding of the framework's applicability and robustness. 3. Sensitivity Analysis of Hyperparameters Have you conducted a thorough sensitivity analysis of key hyperparameters across different datasets and settings? A detailed sensitivity analysis could help understand the impact of these hyperparameters on performance and provide guidelines for their selection. 4. Handling Extreme Data Heterogeneity What solutions have you considered for effectively handling extreme cases of data heterogeneity and preventing gradient explosions? It’s better to investigate adaptive methods that dynamically adjust based on real-time training stability monitoring to handle extreme data heterogeneity more robustly. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: 1. Adaptability to Various Drift Patterns: The paper evaluates performance under specific concept drift scenarios but may not cover all possible drift patterns. Expanding the evaluation to include a wider variety of drift patterns would provide a more comprehensive understanding of the framework’s adaptability. 2. Real-World Implementation: The paper demonstrates effectiveness in controlled experimental settings. Discussing potential challenges and solutions for deploying FedCCFA in real-world environments, including scalability and communication efficiency, would strengthen the paper. 3. Detailed Analysis of Classifier Clustering: The paper proposes class-level classifier clustering but provides limited analysis on the clustering's dynamics and potential pitfalls. 
A deeper analysis of how clustering decisions impact overall model performance and stability would be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and thoughtful comments. Below we address specific questions: > W1 & Q1: Computation Overhead: The proposed FedCCFA framework involves additional computational steps to train balanced classifiers [...] exploring more efficient methods to achieve balanced classifier training would be beneficial. A1: Thank you for your insightful comments. We appreciate your suggestion to consider more efficient alternatives. In our current work, the balanced classifier is trained to prevent classifier bias from misleading client clustering. It is preferable to address classifier bias during local training, as in CCVR [1], FedETF [2], and FedBR [3]. However, (1) CCVR requires virtual representations sampled from an approximated Gaussian mixture model and re-trains the classifier using these representations; (2) FedETF utilizes a fixed ETF classifier to address classifier bias, which requires fine-tuning and is not suitable for the concept drift setting; (3) FedBR generates pseudo-data to reduce bias in local classifiers, which also requires additional computational cost. We agree that future work could explore more efficient methods, and we plan to investigate this in subsequent studies. For example, some global information (e.g., global feature anchors) can be used to calibrate the local classifier. > W2 & Q2: Limited Evaluation Scenarios: The experiments are primarily conducted on standard datasets. Including more diverse datasets [...] would strengthen the validation of the framework's general applicability and robustness. A2: Thank you for your constructive comments. We agree that testing on real-world data is crucial for demonstrating the practical applicability of our method. Unfortunately, there are currently no datasets that fit the specific requirements of our study, since publicly available real-world datasets usually assume that $P(\mathcal{Y}|\mathcal{X})$ is invariant. 
Therefore, existing works focusing on concept drift in federated learning (e.g., FedDrift [4] and Flash [5]) use synthetic datasets. Note that the FMoW dataset used in FedDrift does not contain changes in $P(\mathcal{Y}|\mathcal{X})$, as described in Section 2.1 of the original paper. > W3 & Q3: Sensitivity to Hyperparameters: The effectiveness of FedCCFA relies on several hyperparameters, [...] a more thorough analysis of the sensitivity and robustness to these hyperparameters across different datasets and settings would be valuable. A3: Thank you for your valuable comments. We have conducted additional experiments to thoroughly analyze the sensitivity of key hyperparameters across different datasets and concept drift settings. Experimental results show that $\gamma$ ranging from 10 to 50 tends to perform better, and $\epsilon \le 0.1$ is better.

| | $\gamma$ = 5 | 10 | 20 | 50 | 100 |
|---|---|---|---|---|---|
|Fashion-MNIST (no drift) | 89.84$\pm$0.11 | 89.63$\pm$0.23 | 89.74$\pm$0.18 | 89.70$\pm$0.19 | 89.37$\pm$0.07 |
|Fashion-MNIST (sudden drift) | 89.73$\pm$0.32 | 89.00$\pm$0.19 | 89.39$\pm$0.14 | 89.01$\pm$0.17 | 89.15$\pm$0.22 |
|CIFAR-10 (no drift) | 75.19$\pm$0.15 | 74.97$\pm$0.48 | 74.44$\pm$0.26 | 73.32$\pm$0.06 | 73.47$\pm$0.41 |
|CIFAR-10 (sudden drift) | 70.94$\pm$0.42 | 73.17$\pm$0.77 | 73.21$\pm$0.82 | 72.25$\pm$0.32 | 71.45$\pm$0.68 |

| | $\epsilon$ = 0.05 | 0.1 | 0.15 | 0.2 |
|---|---|---|---|---|
|Fashion-MNIST (no drift) | 89.13$\pm$0.29 | 89.74$\pm$0.18 | 89.53$\pm$0.10 | 89.76$\pm$0.16 |
|Fashion-MNIST (sudden drift) | 88.29$\pm$0.41 | 89.39$\pm$0.14 | 89.13$\pm$0.34 | 89.37$\pm$0.15 |
|CIFAR-10 (no drift) | 74.00$\pm$0.18 | 74.44$\pm$0.26 | 74.18$\pm$0.13 | 73.28$\pm$0.69 |
|CIFAR-10 (sudden drift) | 72.99$\pm$0.78 | 73.21$\pm$0.82 | 71.40$\pm$0.84 | 69.33$\pm$0.50 |

> W4 & Q4: Handling Extreme Data Heterogeneity: The paper addresses data heterogeneity, but extreme cases of data heterogeneity [...] 
Further discussion and potential solutions to handle such extreme cases more effectively would improve the robustness of the framework. A4: Thank you for your insightful comments. In FL, different clients may suffer from various levels of data heterogeneity. Requiring all clients to use the same hyperparameters for local training is not suitable. To address this problem, we propose the adaptive alignment weight. We agree that other adaptive methods that dynamically adjust based on real-time training stability monitoring are more helpful for addressing this problem. One intuitive approach is gradient clipping. According to our observations, gradient clipping can effectively address gradient explosion when using FedFM, FedPAC, etc. Since the main focus of this work is distributed concept drift adaptation, we will investigate other adaptive methods to improve the performance under severe heterogeneity. **References:** [1] Luo, M., Chen, F., Hu, D., Zhang, Y., Liang, J., & Feng, J. (2021). No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. Advances in Neural Information Processing Systems, 34, 5972-5984. [2] Li, Z., Shang, X., He, R., Lin, T., & Wu, C. (2023). No fear of classifier biases: Neural collapse inspired federated learning with synthetic and fixed classifier. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 5319-5329). [3] Guo, Y., Tang, X., & Lin, T. (2023, July). Fedbr: Improving federated learning on heterogeneous data via local learning bias reduction. In International Conference on Machine Learning (pp. 12034-12054). PMLR. [4] Jothimurugesan, E., Hsieh, K., Wang, J., Joshi, G., & Gibbons, P. B. (2023, April). Federated learning under distributed concept drift. In International Conference on Artificial Intelligence and Statistics (pp. 5834-5853). PMLR. [5] Panchal, K., Choudhary, S., Mitra, S., Mukherjee, K., Sarkhel, S., Mitra, S., & Guan, H. (2023, July). 
Flash: concept drift adaptation in federated learning. In International Conference on Machine Learning (pp. 26931-26962). PMLR. --- Rebuttal Comment 1.1: Comment: Thank you for your responses, and I will raise my score.
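As a side note on the gradient clipping mentioned in A4 above: global-norm clipping rescales all gradients so their combined L2 norm stays below a threshold. A minimal stdlib-only sketch, where the function name and list-of-vectors interface are hypothetical illustrations (not from the paper; PyTorch's `clip_grad_norm_` implements the same idea):

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient vectors so their global L2 norm
    does not exceed max_norm; returns (clipped grads, original norm).
    Hypothetical sketch of the clipping idea referenced in A4."""
    total = math.sqrt(sum(g * g for vec in grads for g in vec))
    if total <= max_norm or total == 0.0:
        return grads, total  # already within the budget; leave unchanged
    scale = max_norm / total
    return [[g * scale for g in vec] for vec in grads], total
```

A gradient spike of norm 5 with `max_norm=1.0` is scaled down by a factor of 5, which caps the effective update size during unstable rounds.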
Summary: This paper explores the impact of distributed concept drift on federated learning (FL), and proposes a novel FL framework, FedCCFA, to adapt to distributed concept drift with data heterogeneity (i.e., label distribution shift). Extensive experiments demonstrated the effectiveness and generality of the proposed method. Strengths: 1. Federated learning (FL) under concept drift holds significant academic and practical value. The method proposed in this paper is both reasonable and effective. 2. The experiments are extensive, not only comparing the proposed method with SOTA methods but also simulating various types of drift. This demonstrates the method's generality across different drift scenarios. 3. The paper is well-organized and well-written, enhancing the clarity and impact of its findings. Weaknesses: 1. In the introduction, the authors use the distribution of medical images as an example to describe data heterogeneity. However, the actual task addressed in the paper is label distribution shift, which does not align with the previous example. It is recommended that the authors directly use label distribution shift to describe the problem addressed in the paper. 2. The authors calculate the global feature anchor by averaging (i.e., Eq. (7)). However, given the presence of distributed concept drift among different clients, simply averaging may be unreasonable. It raises the question of whether concept drift adaptation should be considered. 3. The adaptive alignment weight is calculated based on the entropy of the label distribution. However, there lacks the relevant motivation, references, and theoretical justification for this strategy. 4. Federated learning (FL) under distributed concept drift primarily addresses concept drift across clients and over time. This is similar to the research problem of multistream classification under concept drift [1-3]. 
It is recommended that the authors discuss these works to enhance readers' understanding of related research. [1] An adaptive framework for multistream classification, CIKM 2016 [2] Fusion: An online method for multistream classification, CIKM 2017 [3] Online boosting adaptive learning under concept drift for multistream classification, AAAI 2024 5. The paper does not provide open-source code and datasets, which compromises the reproducibility of the experiments. Technical Quality: 3 Clarity: 3 Questions for Authors: discussed in weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments. Below we address specific questions. > W1: In the introduction, the authors use the distribution of medical images [...] It is recommended that the authors directly use label distribution shift to describe the problem addressed in the paper. A1: We believe there may have been a misunderstanding regarding the primary problem our research tackles. To clarify, the primary problem our paper addresses is distributed concept drift with data heterogeneity, not just label distribution shift. We believe that the "label distribution shift" you mentioned corresponds to "data heterogeneity" in our paper, where the marginal distribution $P(\mathcal{Y})$ varies across clients. In our paper, we consider a new problem named distributed concept drift. Specifically, the conditional distribution $P(\mathcal{Y}|\mathcal{X})$ may vary across clients (e.g., diagnoses can vary among doctors) and change over time (e.g., a doctor may offer different diagnoses at different times). These two kinds of changes constitute distributed concept drift. As for data heterogeneity, we think it is a common setting for federated learning, so we conduct experiments under two Dirichlet distributions (Dir(0.1) and Dir(0.5)) to simulate data heterogeneity. > W2: The authors calculate the global feature anchor by averaging (i.e., Eq. (7)). [...] It raises the question of whether concept drift adaptation should be considered. A2: Thank you for your valuable feedback. We would like to clarify that this issue has already been addressed in our manuscript. As detailed in "Clustered feature anchors" in Section 4.2, given the presence of distributed concept drift, simply averaging clients' local anchors could generate incorrect global anchors, which can mislead feature alignment. To address this problem, we propose clustered feature anchors (i.e., $\mathcal{A}\_{m, c}^{(t)}$ in Eq. 
(7)), where the local anchors are averaged "for each cluster $\mathcal{S}\_{m, c}^{(t)}$". Since the conditional distribution $P(\mathcal{Y}|\mathcal{X})$ of clients in the same cluster is similar, the "clustered feature anchors" can correctly guide the feature alignment. The experimental results in Table 4 demonstrate the efficacy of our clustered feature anchors. > W3: The adaptive alignment weight is calculated based on the entropy of the label distribution. However, there lacks the relevant motivation, references, and theoretical justification for this strategy. A3: Thank you for your constructive comments. In our ablation study on alignment weight (Table 5), we observed that the alignment weight should be reduced under severe heterogeneity. Then, we propose the adaptive alignment weight based on the intuition that the degree of data heterogeneity can be reflected by the entropy of the marginal distribution $P(\mathcal{Y})$. This motivation is supported by Shannon's information theory [1]. Besides, He and Garcia [2] also note that imbalanced distributions result in lower entropy because the majority class dominates, reducing the unpredictability of the outcomes. We will clarify this part of our manuscript. We acknowledge the importance of providing a thorough theoretical analysis for our adaptive alignment weight. However, our current work primarily aims to address feature alignment under distributed concept drift. Further theoretical analysis and other adaptive methods will be considered in subsequent studies. > W4: Federated learning (FL) under distributed concept drift [...] This is similar to the research problem of multistream classification under concept drift [1-3]. It is recommended that the authors discuss these works [...]. A4: We appreciate your pointer to these related works [3-5]. These works focus on multistream classification, which involves two independent non-stationary data generating processes (i.e., source stream and target stream). 
A sampling bias may exist between the distributions represented by these two streams of data. That is, training and test data distributions are not guaranteed to be similar. Different from multistream classification, distributed concept drift focuses on the changing conditional distribution $P(\mathcal{Y}|\mathcal{X})$ across clients and over time. For each client at any round, training and test data distributions are assumed to be similar. We will cite these works and discuss the relationship in our final version, as you suggested. > W5: The paper does not provide open-source code and datasets, which compromises the reproducibility of the experiments. A5: Thank you for your valuable feedback. According to the FAQ of NeurIPS 2024, it is not allowed to add any link in any part of the rebuttal. We will provide our code once this paper is accepted. **References:** [1] Shannon, C. E. (1948). A mathematical theory of communication. The Bell system technical journal, 27(3), 379-423. [2] He, H., & Garcia, E. A. (2009). Learning from imbalanced data. IEEE Transactions on knowledge and data engineering, 21(9), 1263-1284. [3] Chandra, S., Haque, A., Khan, L., & Aggarwal, C. (2016, October). An adaptive framework for multistream classification. In Proceedings of the 25th ACM international on conference on information and knowledge management (pp. 1181-1190). [4] Haque, A., Wang, Z., Chandra, S., Dong, B., Khan, L., & Hamlen, K. W. (2017, November). Fusion: An online method for multistream classification. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 919-928). [5] Yu, E., Lu, J., Zhang, B., & Zhang, G. (2024, March). Online boosting adaptive learning under concept drift for multistream classification. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 15, pp. 16522-16530). --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. 
All of my concerns have been addressed, and I will increase the score to 6.
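To make the entropy intuition from A3 above concrete: a low-entropy (skewed) label marginal signals severe heterogeneity, so the alignment weight is reduced. The following sketch is a hypothetical illustration only; the function name, the normalization by $\log C$, and the multiplicative form are our assumptions, not details from the paper:

```python
import math

def adaptive_alignment_weight(label_counts, base_weight=1.0):
    """Hypothetical sketch: scale an alignment weight by the normalized
    Shannon entropy of a client's label marginal P(Y). A skewed
    distribution has low entropy, so the weight shrinks."""
    total = sum(label_counts)
    probs = [c / total for c in label_counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(label_counts))  # entropy of the uniform case
    return base_weight * (entropy / max_entropy if max_entropy > 0 else 1.0)
```

Under this toy form, a uniform distribution keeps the full base weight, while a client holding almost only one class receives a much smaller weight.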
Summary: The paper proposes a federated learning framework called FedCCFA, which addresses the challenges posed by distributed concept drift and data heterogeneity. The authors introduce innovative solutions such as classifier clustering and adaptive feature alignment to enhance collaboration and improve model performance under various concept drift settings. The authors demonstrate the effectiveness of their approach through experimental results showing significant improvements over existing methods. Strengths: 1. The paper tackles an important and relatively unexplored problem in federated learning, namely concept drift and data heterogeneity. 2. The extensive experimental results demonstrate the effectiveness of FedCCFA, showing significant performance improvements over existing methods in various concept drift scenarios. Weaknesses: 1. I am unable to fully understand the motivation of the paper. Figure 1 does not clearly illustrate how Decoupled Clustering addresses the real drift problem compared to previous decoupled methods. While it shows better performance than FedAvg and pure Decoupled methods, the underlying reasons remain unclear. Why does introducing clustering improve performance? 2. The writing of the paper can be improved. For example, the specific meaning of \(\phi_{i,c}\) is not explained, leading to significant confusion in understanding equation (4). The authors need to explain each symbol to enhance the readability of the paper. Additionally, the term "balanced batch" in line 160 needs to be explained, specifically how it was sampled. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In the experiments, which classifiers were ultimately clustered in the class-level clustering? What does the final distance matrix look like? 2. Equation (5) uses uniform aggregation to avoid privacy concerns. Why does this method avoid privacy concerns? Could taking the average directly lead to performance loss in classifiers? 
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors point out that FedCCFA requires training balanced classifiers, which will increase computational complexity to some extent. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and kind words to our work. Below we address specific questions: > W1: I am unable to fully understand the motivation of the paper. Figure 1 does not clearly illustrate how Decoupled Clustering [...] the underlying reasons remain unclear. Why does introducing clustering improve performance? A1: Thank you for your constructive feedback. We will clarify this part of our manuscript. When real drift occurs, decoupled methods can adapt to this drift by training the classifier while fixing the extractor. Each local classifier learns corresponding conditional distribution $P(\mathcal{Y}|\mathcal{X})$. However, some clients may share similar $P(\mathcal{Y}|\mathcal{X})$, and pure decoupled methods neglect the fine-grained collaboration between local classifiers. This will make each client's local classifier overfit to its local data. To address this problem, we propose a novel class-level classifier clustering method, which aggregates the class classifiers within the same cluster. This aggregation reduces the bias introduced by any single client's data, contributing to improved generalization performance. That is, individual local classifiers trained on local data exhibit biased parameter estimation, while the aggregation of multiple classifiers mitigates these biases and improves the overall parameter estimation. > W2: The writing of the paper can be improved. For example, the specific meaning of $\boldsymbol{\phi}\_{i,c}$ is not explained, [...] Additionally, the term "balanced batch" in line 160 needs to be explained, specifically how it was sampled. A2: Thank you for your valuable feedback, and we will incorporate the suggestions into our updated draft. 1. 
The meaning of $\{\hat{\boldsymbol{\phi}}\_{k, c}^{(t)}\}$ : At round $t$, the server separates each client $k$'s balanced classifier $\hat{\boldsymbol{\phi}}\_k^{(t)}$ for each class $c$ ( referred to as class classifier $\hat{\boldsymbol{\phi}}\_{k, c}^{(t)}$ ). Then, for each class $c$, the server measures the class-level distances $\mathcal{D}\_c(i, j)$ between clients $i$ and $j$ by their class classifiers $\hat{\boldsymbol{\phi}}\_{i, c}^{(t)}$ and $\hat{\boldsymbol{\phi}}\_{j, c}^{(t)}$. 2. The meaning of "balanced batch": To prevent class imbalance from misleading classifier clustering, FedCCFA uses a balanced batch $b\_k^{(t)}$ to train a balanced classifier $\hat{\boldsymbol{\phi}}\_k^{(t)}$ for distance estimation. To reduce computation cost, we randomly and uniformly select 5 samples for each class $c \in [C]$, i.e., the balanced batch size $|b\_k^{(t)}|$ is $5 \* C$. > Q1: In the experiments, which classifiers were ultimately clustered in the class-level clustering? What does the final distance matrix look like? A3: In our experiments, for each class $c \in [C]$, all selected clients' class classifiers $\{\hat{\boldsymbol{\phi}}\_{k, c}^{(t)}\}$ are used for class-level clustering, where $k \in \mathcal{I}^{(t)}$. That is, for each class $c$, the server computes a $|\mathcal{I}^{(t)}| \* |\mathcal{I}^{(t)}|$ distance matrix $\mathcal{D}\_c$ to measure the distance between each client's class classifier. Please see the attached PDF in the top-level comment for the visualization of distance matrix. > Q2: Equation (5) uses uniform aggregation to avoid privacy concerns. Why does this method avoid privacy concerns? Could taking the average directly lead to performance loss in classifiers? A4: FedCCFA involves class-level classifier aggregation (Eq. (5)) and class-level feature anchor aggregation (Eq. (7)). An intuitive manner is sample-number-based weighted aggregation, which is similar to the model aggregation in FedAvg. 
However, this manner requires clients to upload the sample number for each class, which may leak privacy about clients' category distributions. To address this problem, we consider uniform aggregation, where all clients share the same aggregation weight. This aggregation manner does not involve other private information. We compare the two manners of aggregation. Interestingly, we observed that uniform aggregation even performs better, especially under the concept drift setting. This may be because uniform aggregation can prevent the aggregated classifier from overfitting to a local classifier that is significantly biased due to data heterogeneity.

| | Fashion-MNIST | CIFAR10 | CINIC-10 |
| ------------- | -------- | -------- | ----|
| no drift (weighted) | 89.39$\pm$0.16 | 73.06$\pm$0.69 | 57.41$\pm$0.12 |
| no drift (uniform) | **89.74$\pm$0.18** | **74.44$\pm$0.26** | **58.70$\pm$0.33** |
| sudden drift (weighted) | 89.14$\pm$0.31 | 66.49$\pm$1.21 | 45.60$\pm$0.38 |
| sudden drift (uniform) | **89.39$\pm$0.14** | **73.21$\pm$0.82** | **52.62$\pm$0.51** |

Thank you for your valuable comments. We will incorporate these results into our final version. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed response and additional experiments. My concerns have been adequately addressed. I will keep my positive score.
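The "balanced batch" described in A2 of this exchange (5 samples drawn uniformly per class, so no class dominates the classifier used for distance estimation) can be sketched as follows. This is a stdlib-only illustration; the helper name and the `(x, y)`-pair dataset layout are our assumptions:

```python
import random

def sample_balanced_batch(dataset, num_classes, per_class=5, seed=0):
    """Hypothetical sketch of a balanced batch: draw per_class samples
    uniformly for every class, giving a batch of size
    per_class * num_classes in which every class appears equally."""
    rng = random.Random(seed)
    by_class = {c: [] for c in range(num_classes)}
    for x, y in dataset:          # bucket samples by label
        by_class[y].append((x, y))
    batch = []
    for c in range(num_classes):  # uniform draw within each class
        batch.extend(rng.sample(by_class[c], per_class))
    return batch
```

Training the distance-estimation classifier on such a batch keeps class imbalance from masquerading as a conditional-distribution difference during clustering.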
Summary: This paper proposes to consider two cross-coupled important issues, i.e., federated learning and concept drift. To address such a challenging problem, the FedCCFA framework has been proposed, composed of classifier clustering and feature alignment modules. The former is designed to cope with concept drift and enhance the generalization performance of the model. The latter is utilized to mitigate the data heterogeneity across clients. The overall design is sound, interesting, and intuitive. The whole paper is well-organized and well-written. The experimental evaluation is comprehensive and convincing. Strengths: - A relatively new and important problem of federated learning under concept drift is addressed. - Contributions of this work are in the framework perspective and the module/strategy design perspective. - Both classifier clustering and feature alignment are sound and interesting ideas. - The whole paper is well-organized. - Comprehensive evaluation comparing many counterparts (including SOTAs) on benchmark datasets. Weaknesses: - Lack of a framework pipeline to intuitively explain the overall design. - Time complexity analysis is missing. - Visual classification results on the datasets are preferable. - The source code is not opened. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and kind words about our work. Below we address specific questions: > W1: Lack of a framework pipeline to intuitively explain the overall design. A1: Thank you for your valuable feedback. Please see the attached PDF in the top-level comment for our framework pipeline. We will incorporate this pipeline into our final version. > W2: Time complexity analysis is missing. A2: Thank you for your constructive comment. We conducted a comprehensive time complexity analysis at the client side and compared our FedCCFA to other methods, including FedFM [1], FedRep [2] and FedPAC [3]. $E$ and $B$ denote the number of epochs and batches, respectively. In our FedCCFA, we train a balanced classifier for $s$ iterations with a batch of data. $m$, $f$ and $c$ denote the computation costs for training the local model, extractor and classifier, respectively. Note that, to reduce computation cost, $s$ is small.

| Method | Complexity |
| --------- | ------------- |
| FedFM | O(E\*B\*m) |
| FedRep | O(E\*B\*f+B\*c) |
| FedPAC | O(E\*B\*f+B\*c) |
| FedCCFA | O(E\*B\*f+B\*c+s\*c) |

In particular, at the server side, our classifier collaboration method is more efficient than that of FedPAC, which can be demonstrated by the results in Appendix B.6. > W3: Visual classification results on the datasets are preferable. A3: Thank you for your valuable feedback. Please see the attached PDF in the top-level comment for the visualization of the distance matrix. We think it can be helpful to understand our clustering method. > W4: The source code is not opened. A4: Thank you for your valuable feedback. According to the FAQ of NeurIPS 2024, it is not allowed to add any link in any part of the rebuttal. We will provide our code once this paper is accepted. **References:** [1] Ye, R., Ni, Z., Xu, C., Wang, J., Chen, S., & Eldar, Y. C. (2023). Fedfm: Anchor-based feature matching for data heterogeneity in federated learning. IEEE Transactions on Signal Processing. 
[2] Collins, L., Hassani, H., Mokhtari, A., & Shakkottai, S. (2021). Exploiting shared representations for personalized federated learning. In International conference on machine learning (pp. 2089-2099). PMLR. [3] Xu, J., Tong, X., & Huang, S. L. Personalized Federated Learning with Feature Alignment and Classifier Collaboration. In The Eleventh International Conference on Learning Representations. --- Rebuttal Comment 1.1: Comment: Thank you for your responses, I will stick my score with a higher confidence.
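The class-level distance matrix referenced in this exchange (and visualized in the attached PDF) pairs each client's weight vector for one class against every other client's. A minimal sketch of how such a pairwise $|\mathcal{I}^{(t)}| \times |\mathcal{I}^{(t)}|$ matrix could be computed; the Euclidean metric and function name are our assumptions, and the paper's exact distance measure may differ:

```python
import math

def class_distance_matrix(class_classifiers):
    """Hypothetical sketch: given each client's weight vector for one
    class c (e.g., the row of the last linear layer for that class),
    build the symmetric pairwise distance matrix D_c used for clustering."""
    n = len(class_classifiers)
    dist = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(class_classifiers[i],
                                              class_classifiers[j])))
            dist[i][j] = dist[j][i] = d  # symmetry: D[i][j] == D[j][i]
    return dist
```

Clients whose conditional distributions agree for class $c$ should produce nearby weight vectors, i.e., small entries in $D_c$, which is what the clustering step exploits.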
Rebuttal 1: Rebuttal: Dear Reviewers, We would like to thank all reviewers for providing constructive comments that helped us to improve our paper. We are encouraged that reviewers think our paper:

- "The idea of letting the global/local models evolve over time is interesting and relevant" (Reviewer U5Wz),
- "A relatively new and important problem of federated learning under concept drift is addressed" and "Contributions of this work are in the framework perspective and the module/strategy design perspective." (Reviewer HDwb),
- "The paper tackles an important and relatively unexplored problem in federated learning" (Reviewer erp1) and "This work addresses a critical gap in federated learning" (Reviewer uegQ),
- "The experiments are extensive" (Reviewer EQ7w) and "the results show significant performance improvements over existing methods in various concept drift scenarios" (Reviewer erp1)

We have been working diligently on improving the paper on several fronts, addressing your critique. Below, we provide a PDF containing a more helpful **pipeline of our FedCCFA** (as suggested by Reviewer HDwb) and a **visualization of the distance matrix after concept drift** (as suggested by Reviewer erp1). Besides, according to the FAQ of NeurIPS 2024, it is not allowed to add any link in any part of the rebuttal. We will provide our code once this paper is accepted. Please see our reviewer-specific feedback for more information. Pdf: /pdf/c0203e45b4cf560b0ec451f8fb952b17f65e53fe.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper introduces a solution based on clustering to group classifiers in the federated learning setting. Such an approach is meant to deal with concept drift in data so as to track and adapt to the evolution of the clients' classifiers in federated learning. One of the core points is to align features, and this is carried out by the adaptive feature alignment procedure suggested in the paper. Strengths: - The idea of letting the global/local models evolve over time is interesting and relevant - The clustering is based on the classifier space (the feature extraction phase is fixed) - Results in the concept drift are promising Weaknesses: Some technical points of the paper are not adequately addressed such as: - the partitioning of feature extractor + classifier is relevant and requires to share a common feature extractor. It is not clear to this reviewer whether the feature extractor is fixed or adapted over time - clustering of models requires to assume that the distribution of the model parameters shares some "locality" (e.g., this holds for linear models but not in general for NNs). - performance in stationary-conditions are less relevant. This is a crucial point (e.g., not suffered by other solutions) that requires to be further investigated Technical Quality: 3 Clarity: 2 Questions for Authors: - What's the need to partition the models into feature extractor and classifier? - Which is the assumption guaranteeing the clustering of classifier parameters in the parameter space? - What's the reason of the reduction in accuracy in no-change condition (i.e., Table 1) and how to mitigate this point? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: see previous box Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and kind words to our work. Below we address specific questions: > W1: the partitioning of feature extractor + classifier is relevant and requires to share a common feature extractor. It is not clear to this reviewer whether the feature extractor is fixed or adapted over time. A1: We will clarify this part of our manuscript. As we mentioned in Section 4.3 (FedCCFA), the feature extractor is trained locally by each selected client (line 9 in Algorithm 1) and the server aggregates these clients' local extractors (line 15 in Algorithm 1). The aggregated extractor is broadcasted to the clients at the next round (line 4 in Algorithm 1). The training and aggregation of feature extractors can further enhance generalization performance. In particular, to mitigate the impact of concept drift on the model training, local classifier training (line 8) is conducted before local extractor training, which can effectively adapt to real drift (i.e., the conditional distribution $P(\mathcal{Y}|\mathcal{X})$ changes). > W2: clustering of models requires to assume that the distribution of the model parameters shares some "locality" (e.g., this holds for linear models but not in general for NNs). A2: We believe there may have been a misunderstanding regarding our clustering method. Our clustering method is also suitable for NNs. Specifically, in our FedCCFA framework and other decoupled methods (e.g., FedBABU [1] and FedPAC [2]), the classifier is the last fully-connected layer (i.e., a simple linear classifier) and the feature extractor is composed of the remaining layers. The extractor is shared across all clients and maps the input to a low-dimensional feature vector. Once concept drift occurs (i.e., the conditional distribution $P(\mathcal{Y}|\mathcal{X})$ changes), different linear classifiers will be learned and the parameters of these classifiers reflect corresponding $P(\mathcal{Y}|\mathcal{X})$. 
Finally, the server separates these classifiers for each class (referred to as class classifier) and measures the class-level distances between each client's class classifier, which can group the clients according to their $P(\mathcal{Y}|\mathcal{X})$. The class classifier is still a linear model that outputs the logit for the specific class. Therefore, our method is also suitable for NNs, as long as the last fully-connected layer is used as the local classifier. Our experiments conducted on two CNNs also demonstrate this feasibility. To avoid further confusion, we will clarify this section of our manuscript. > W3: performance in stationary-conditions are less relevant. This is a crucial point (e.g., not suffered by other solutions) that requires to be further investigated. A3: The lower accuracy under no drift setting is attributed to decoupled training (i.e., training the classifier first and then training the feature extractor). Specifically, for decoupled methods (e.g., FedPAC [2], FedRep [3] and our FedCCFA), training the classifier first might cause it to fit the initial features, resulting in gradient attenuation during backpropagation. This will restrict the subsequent extractor's training process and prevent it from optimally learning features. In contrast, when training the entire model together, all layers can simultaneously adapt to the training data, allowing them to synergize effectively. However, in the context of concept drift, decoupled training is crucial because the classifier can effectively adapt to new conditional distribution $P(\mathcal{Y}|\mathcal{X})$ and small gradients will be back-propagated to the extractor. This can stabilize the training of the feature extractor. Besides, the results in Table 1 show that our method still outperforms the majority of baseline methods. > Q1: What's the need to partition the models into feature extractor and classifier? 
A4: In this paper, we focus on real drift, where the conditional distribution $P(\mathcal{Y}|\mathcal{X})$ changes. Since the marginal distribution $P(\mathcal{X})$ is invariant, the representation learning should not be affected by this drift. That is, the feature extractor can be shared across clients to enhance generalization performance. The key to adapting to this drift is the classifier, as it maps the representation to the label space (i.e., learning the distribution $P(\mathcal{Y}|\mathcal{X})$). Therefore, decoupling the model into a feature extractor and a classifier can effectively adapt to concept drift while enhancing the generalization performance. The superiority of the shared feature extractor is demonstrated by the better performance compared with other clustered FL methods under various drift settings. > Q2: Which is the assumption guaranteeing the clustering of classifier parameters in the parameter space? A5: Since the classifier is the last fully-connected layer (i.e., a simple linear classifier), the clustering of its parameters makes sense. Please refer to A2 for more details. > Q3: What's the reason of the reduction in accuracy in no-change condition (i.e., Table 1) and how to mitigate this point? A6: Please refer to A3 for the reason of this reduction. To mitigate this point, it can be useful to fine-tune the entire model after training the extractor. **References:** [1] Oh, J., Kim, S., & Yun, S. Y. (2022). FedBABU: Towards enhanced representation for federated image classification. International Conference on Learning Representations. [2] Xu, J., Tong, X., & Huang, S. L. (2023). Personalized federated learning with feature alignment and classifier collaboration. International Conference on Learning Representations. [3] Collins, L., Hassani, H., Mokhtari, A., & Shakkottai, S. (2021). Exploiting shared representations for personalized federated learning. In International conference on machine learning (pp. 2089-2099). PMLR.
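The decoupled training order defended in A1 and A3 above (update the classifier first with the extractor frozen, then update the extractor against the adapted classifier) can be sketched abstractly. The function names and parameter-passing style here are placeholders, not the paper's API; only the ordering is taken from the rebuttal:

```python
def local_update(extractor_params, classifier_params,
                 train_classifier, train_extractor):
    """Sketch of one round of decoupled local training: the classifier
    is trained first (extractor frozen) so it can adapt to a new P(Y|X);
    the extractor is then trained against the already-adapted classifier,
    which stabilizes representation learning under real drift."""
    classifier_params = train_classifier(extractor_params, classifier_params)
    extractor_params = train_extractor(extractor_params, classifier_params)
    return extractor_params, classifier_params
```

The point of the sketch is purely the ordering: under real drift, swapping the two calls would let a stale classifier back-propagate large, misleading gradients into the shared extractor.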
null
null
null
null
null
null
CogVLM: Visual Expert for Pretrained Language Models
Accept (poster)
Summary: This paper introduces CogVLM, a powerful open-source visual language foundation model. CogVLM bridges the gap between the frozen pretrained language model and image encoder with a trainable visual expert module in the attention and FFN layers. The contributions are summarized: 1. introduces the CogVLM model, which deeply integrates visual and linguistic features while retaining the full capabilities of a pretrained large language model. CogVLM-17B, trained from Vicuna-7B, achieves state-of-the-art results across 17 classic cross-modal benchmarks. 2. validates the effectiveness of the proposed visual expert module and the importance of deep fusion. 3. makes the weights of CogVLM and the dataset used in the SFT phase available to the public. Strengths: 1. The paper's results achieve state-of-the-art (SOTA) performance, demonstrating excellent capabilities across multiple benchmarks. 2. The paper is well-organized and the writing is clear. 3. The starting point of the paper, addressing the issue of performance degradation in the LLM due to parameter updates during multimodal training, is very meaningful and valuable. Weaknesses: 1. A new set of QKV and FFN weights would nearly double the number of model parameters. Why not consider using LoRA (Low-Rank Adaptation), where text tokens use QKV/FFN and image tokens use QKV+LoRA/FFN+LoRA? Additional experiments are needed to support this. 2. There is a lack of ablation studies comparing the impact of different RoPE (Rotary Position Embedding) strategies for visual tokens on the results. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why train a separate grounding model? Would it degrade the chat model if the chat model also had grounding capabilities? 2. Why not try tuning the visual encoder? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We sincerely appreciate your thorough review and insightful feedback. Your comments are invaluable in helping us improve our work. We address each of your points below: ## 1. Parameter efficiency and LoRA consideration Thank you for this good question. The Visual Expert indeed introduces additional parameters, increasing static GPU memory usage during training and inference. This represents a performance-cost trade-off, enabling CogVLM to outperform models like LLaVA-Next, which use much larger language models. Similar approaches have been employed in NLP models, such as DeepSeek-v2 [1], which has 236B total parameters but only 21B activated parameters. We explored the LoRA expert method with a LoRA rank of 128. However, we found its expressiveness limited: - It required 3.7 times more steps to reach the same loss level as our current method. - The per-step computation time was nearly identical to our current approach. These results indicate that while LoRA can reduce parameter count, it may come at the cost of training efficiency. In our experiments with language models up to 32B parameters, the additional memory overhead remained acceptable. For future extensions to larger language models, we will continue to explore parameter reduction techniques, including more efficient implementations of LoRA. ## 2. Ablation studies on RoPE strategies for visual tokens We appreciate you highlighting this important point. Our positional encoding scheme addresses the "remote attenuation effect" in RoPE, where attention weights decrease with increasing token distance. This prevents the query from overfocusing on nearby image tokens. Comparative experiments demonstrate the benefits: - At 224x224 resolution, our method and the original achieve the same pre-training loss. - At 490x490 resolution, our method achieves 5% lower loss. - On the DocVQA task (1120x1120 resolution), our method improves accuracy from 47.7% to 49.1%. 
The concurrent work Mousi [2] observed a similar phenomenon, further validating our approach. We will include these results in the revised paper to provide a clearer comparison of different RoPE strategies. ## 3. Separate grounding model Training a unified model had minimal impact on overall performance. Our experiments show that simultaneously training on both grounding and chat data led to: - A 1.5 point decrease in the Nocaps dataset score (from 128.3 to 126.8) - A 0.8 point increase in the VQAv2 task score (from 82.3 to 83.1) The main reason for using two models is to address ambiguous prompts where it's unclear whether coordinate outputs are required. For example, "Where is the dog in the image?" With two models, we can provide clearer instructions for grounding tasks without affecting the chat model's performance. ## 4. Visual encoder tuning We apologize for any confusion. We did indeed train the ViT parameters, as mentioned in line 152 of the original paper: "we activated the visual encoder's parameters and adjusted its learning rate to be one-tenth of that used for the remaining training parameters." [1] DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model [2] MouSi: Poly-Visual-Expert Vision-Language Models --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses. My concerns have been addressed, and I would like to raise the score to 4 points
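The visual-expert mechanism discussed in this thread, separate trainable projections for image tokens while text tokens go through the frozen LLM projections, can be sketched with a toy routing function. This is an illustrative sketch, not the released CogVLM code; `project` and the shapes are invented for the example.

```python
import numpy as np

# Route each token through a modality-specific projection: text tokens use
# the frozen LLM weights, image tokens use the trainable visual expert.
# With no image tokens present, the output equals the original LLM's, so the
# language model's behavior is preserved exactly on text-only inputs.

def project(x, is_image, w_text, w_image):
    """Apply w_text to text tokens and w_image to image tokens."""
    out = np.empty_like(x)
    out[~is_image] = x[~is_image] @ w_text
    out[is_image] = x[is_image] @ w_image
    return out

rng = np.random.default_rng(0)
dim = 4
x = rng.standard_normal((6, dim))
is_image = np.array([True, True, False, False, False, False])
w_text = np.eye(dim)                       # frozen LLM projection (identity here)
w_image = rng.standard_normal((dim, dim))  # trainable visual expert

y = project(x, is_image, w_text, w_image)
# Text tokens pass through unchanged -> the LLM is intact without images.
assert np.allclose(y[~is_image], x[~is_image])
```

In the real model this routing would apply to every QKV and FFN projection in every layer, which is exactly why the parameter count roughly doubles.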
Summary: The paper introduces CogVLM, a new open-source visual language foundation model. Contrary to the popular method of adapting LLMs by fine-tuning their original weights, CogVLM introduces new weights specifically for processing the visual tokens. Concretely, CogVLM copies all weights of the LLM to form visual expert modules at every attention and FFN layer. During training only the visual expert weights are updated, such that the LLM retains its language modeling capabilities when no image input is given. CogVLM is evaluated extensively on image captioning, VQA, LVLM and Referring Expression Comprehension benchmarks. Strengths: 1. The design choice to add additional parameters for the vision tokens is reasonable and implemented well, such that it defaults to the original model without image input. 2. The experimental evaluation is extensive. CogVLM achieves convincing results across the board. 3. Ablations shed light on important questions such as the importance of the visual expert parameters and the causal attention mask. 4. As an open-source model, CogVLM can have a high impact on the research community. Weaknesses: 1. The prompting differences between the tasks are not clear. For instance, how can a user make sure that CogVLM returns bounding box information, or prevent CogVLM from outputting bounding boxes? 2. While the results of CogVLM are impressive, it is not a parameter-efficient method. Trying to scale CogVLM to LLMs of better language modeling capacity will double the parameters, which is not sustainable for big LLMs. 3. As VLMs become more powerful over time, there will be higher expectations of their capabilities. One aspect that has not been mentioned at all is whether CogVLM can handle multi-image inputs or multi-turn conversations with interleaved image-text context. Based on the examples given and the training data, the paper currently suggests this is a model limitation. 4. Limitations are not discussed at all. 
The checklist states that the paper has no limitations without explanation, which is a superficial statement and hardly ever true. The previous points are likely limitations. 5. Similarly, the authors do not provide a discussion of the broader impact of the paper. Since CogVLM is an open model it has the potential to have a high impact, both positive and negative. Since CogVLM puts a particular effort into retaining the original LLM's performance, it also means that it carries over all biases and limitations from the base LLM. It is important that papers on generative models reflect on the societal impact. Technical Quality: 3 Clarity: 3 Questions for Authors: It would be great if the authors could clarify my concerns from the weaknesses section. Apart from that, here is one more minor suggestion. - Figure 1 shows interesting qualitative examples. I suggest referencing the figure in the main text and extending the caption to explain the displayed capabilities. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations and broader impact are not discussed although they certainly exist. The authors should add a discussion on both (see weaknesses). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We sincerely appreciate your thorough review and insightful feedback. Your comments have been invaluable in helping us improve our work. We address each of your points below: ## 1. Prompting differences between tasks Thank you for highlighting this important aspect. In our approach, we use two separate models for chat and grounding tasks. This design choice is based on the following considerations: - Using a single model wouldn't necessarily decrease performance, but some ambiguous prompts can make it difficult for the model to determine whether coordinate outputs are required. For example, "Where is the dog in the image?" - To avoid confusion, we believe it's necessary to use very clear prompts for grounding-type questions during both training and inference. For instance, "Specify the exact coordinates of the dog in the image" or adding task-specific prefixes like <grounding>. We will clarify this in the revised paper to ensure users understand how to effectively prompt CogVLM for different tasks. ## 2. Parameter efficiency We acknowledge that the Visual Expert introduces additional parameters, increasing static GPU memory usage. This represents a trade-off between performance and computational cost. Notably, CogVLM outperforms models like LLaVA-Next, which use much larger language models, demonstrating the efficiency of our approach. Similar parameter-heavy approaches have been employed in NLP models, such as DeepSeek-v2 (236B total parameters, 21B activated). We explored the LoRA expert method as suggested by Reviewer 7nCy, using a LoRA rank of 128. However, we found its expressiveness limited: - It required 3.7 times more steps to reach the same loss level as our current method. - The per-step computation time was nearly identical to our current approach. These results indicate that while LoRA can reduce parameter count, it may come at the cost of training efficiency. 
In our experiments with language models up to 32B parameters, the additional memory overhead remained acceptable. For future extensions to larger language models, we will continue to explore parameter reduction techniques. ## 3. Multi-image inputs We appreciate you bringing this to our attention. Our latest version of the model has been enhanced using the MMC4 and OBELICS datasets to improve multi-image capabilities. We will update the results in the subsequent version of the paper to reflect these advancements. ## 4. Limitations discussion We sincerely apologize for the oversight in not discussing limitations. We will add a dedicated section on limitations in the revised paper, addressing the points raised by reviewers, including: - The impact of Visual Expert modules on parameter count - Support for multi-image inputs - Potential biases inherited from the base LLM ## 5. Broader impact We agree that discussing the broader impact is crucial. We will add this point in the limitations section, addressing the potential negative impacts. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I have a couple of additional comments in the following. > Prompting differences between tasks I appreciate the clarification. While training separate models is a valid solution, it is less flexible at inference time and a bit wasteful in terms of resource usage (e.g. for training). > Parameter efficiency The LoRA experiments are interesting. It would have been interesting to see a proper quantitative evaluation if training such a model has already been completed. While the remaining questions have been answered, they mainly consist of promises without going into more detail on how the updates will be realized in the paper. Hence, I am keeping my original score.
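The LoRA-expert alternative weighed in this rebuttal replaces the full extra projection with a low-rank update W + BA. Below is a minimal sketch with illustrative shapes (rank 8, dim 64, not the rank-128 setting the authors tested), showing both the zero-initialization property and the parameter savings:

```python
import numpy as np

dim, rank = 64, 8
rng = np.random.default_rng(1)
W = rng.standard_normal((dim, dim))        # frozen pretrained projection
A = rng.standard_normal((rank, dim)) * 0.01
B = np.zeros((dim, rank))                  # B starts at zero: W + B @ A == W at init

def lora_forward(x, W, A, B):
    """Project x with the frozen weight plus the low-rank LoRA update."""
    return x @ (W + B @ A).T

x = rng.standard_normal((3, dim))
# At initialization the LoRA branch is inactive, so behavior matches W alone.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)

# Parameter count: a full extra matrix vs the low-rank expert.
full_params = dim * dim        # 4096
lora_params = 2 * dim * rank   # 1024
```

The rebuttal's observation fits this picture: the low-rank factorization shrinks the parameter count (here 4x), but its reduced expressiveness can require many more optimization steps to reach the same loss.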
Summary: This paper aims to add multimodal capability while maintaining the language capability of the LLM. The authors thus propose a feature fusion strategy that does not sacrifice any performance on NLP tasks. The experimental results on multiple datasets are impressive, which demonstrates the validity of the proposed method to some extent. Strengths: 1. The motivation of this paper sounds reasonable. Most current VLMs do not take into account the degradation of the LLM when training on multimodal data, which is a good research point. 2. The method looks interesting. A simple feature fusion strategy seems to tackle the above problem. 3. The writing of this paper is good, and the structure is easy to follow. Weaknesses: The experiments still have room for improvement: (1) The results on the representative VLM benchmark MME are not given, which is widely used by current methods, such as LLaVA, InternVL, and Qwen-VL. It is suggested to add the results of MME in Table 2. (2) In order to maintain the capability of the LLM, one could also add text data while using multimodal data; is there a corresponding comparison? Some common text data might also serve this purpose. (3) Alternatively, one could freeze the parameters of the LLM but train the visual encoder and the connector, which might also do the trick. (4) It would be good to see CogVLM's ability on the representative multimodal video benchmark Video-MME [1]. The authors could input multiple video frames, i.e., generalize to multiple images. GPT-4V, InternVL-Chat, Qwen-VL-Max, and Qwen-VL-Chat all generalize well to Video-MME. It is suggested to add the results of CogVLM to this paper. [1] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis. Given the limited time available for rebuttal, the authors could incorporate the above experimental results and discussions into the final accepted version or the next version. 
Technical Quality: 4 Clarity: 4 Questions for Authors: None Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We sincerely appreciate your thorough review and insightful feedback. Your comments are invaluable in helping us improve our work. We are pleased to address each of your points below: ## 1. MME Benchmark Results Thank you for suggesting the inclusion of MME benchmark results. We have evaluated CogVLM using LLaMA-3 as the base model on the MME benchmark. The results are as follows: | Model | MME | OCR | artwork | celebrity | color | count | existence | landmark | position | posters | scene | perception | |---|-----|-----|---------|-----------|-------|-------|-----------|----------|----------|---------|-------|------------| | CogVLM-LLaMA3-32B | 2043.54 | 132.50 | 150.75 | 172.06 | 168.33 | 153.33 | 195.00 | 174.50 | 103.33 | 147.62 | 150.75 | 1548.18 | | CogVLM-GLM3-32B | 2211.76 | 155.00 | 149.75 | 155.29 | 165.00 | 175.00 | 200.00 | 182.75 | 131.67 | 185.37 | 155.50 | 1655.33 | | Model | code_reasoning | commonsense_reasoning | numerical_calculation | text_translation | reasoning | |---|---|---|---|---|---| | CogVLM-LLaMA3-32B | 70.00 | 132.86 | 107.50 | 185.00 | 495.36 | | CogVLM-GLM3-32B | 120.00 | 151.43 | 92.50 | 192.50 | 556.43 | We will include these results in the revised version of our paper to provide a more comprehensive comparison with other state-of-the-art models. ## 2. Maintaining LLM Capability with Text Data This is an excellent point. Our method primarily focuses on improving the model structure, while approaches like VILA and InternLM-XComposer2 make improvements on the data side. These methods are not mutually exclusive. In fact, our latest experiments show that the best multimodal fusion is achieved by: 1. Using Visual Expert modules 2. Unfreezing LLM parameters 3. 
Simultaneously training on multimodal and pure text data We will include a discussion on this synergistic approach in the next version of our paper, demonstrating how structural and data-based improvements can be combined for optimal performance. ## 3. Freezing LLM Parameters You're correct that this approach has been explored, notably in the BLIP-2 series of models. However, as seen in Table 2 of our paper, this method often underperforms on many benchmarks. Our experiments also indicate that freezing language model parameters can lead to issues with instruction following and multimodal fusion. We believe that fine-tuning the language model parameters is crucial for optimal performance in multimodal tasks, which is why our approach allows for updating these parameters. ## 4. Video-MME Benchmark We agree that extending image model capabilities to video is an interesting and valuable area of exploration. We are currently conducting research in this direction and plan to evaluate our model on the Video-MME benchmark in the near future. --- Rebuttal Comment 1.1: Title: Correction and Apology Comment: We sincerely apologize for an error in our initial rebuttal. In our discussion of the LLaMA model used for the MME benchmark results, we incorrectly stated that we used LLaMA-3 32B. This was a mistake. The correct model used was LLaMA-3 8B. Thank you for your understanding, and we apologize again for this oversight.
Summary: This paper proposes a large vision language model, CogVLM, which shows strong capabilities on various benchmarks. In contrast to popular solutions that fuse the vision and language tokens in the LLM with shared trainable parameters, CogVLM bridges the gap between the frozen pre-trained language models and image encoders with trainable visual expert modules in the attention and FFN layers. In this way, CogVLM excels at maintaining the language abilities of the original LLM and achieves good performance on various vision-language benchmarks, including captioning, VQA, and visual grounding. Strengths: 1. This paper is well-written and easy to follow. 2. The idea of the Visual Expert is simple but effective. It is impressive that newly introduced Visual Expert Modules with large parameter counts can be well-optimized and perform well. 3. The proposed CogVLM is extensively evaluated using various benchmarks. It can be widely adopted for captioning, VQA, and Visual Grounding. Weaknesses: 1. The visual expert will introduce a large number of parameters. As a result, CogVLM with a 7B LLM backend will finally have 17B parameters. Although this won't lead to high computational costs in inference, the significantly increased parameter count will require much more GPU memory. It would be helpful if the authors could discuss whether there are alternative solutions to save memory. 2. CogVLM adopts Visual Expert Modules to retain LLM abilities by freezing the LLM. However, what about involving pure language data in the SFT stage, as in [1]? Whether the LLM ability can be retained this way needs to be discussed. 3. CogVLM allows all visual tokens to share a single position ID in the RoPE. An ablation study should be reported. 
[1] InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are not discussed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We sincerely appreciate your thorough review and constructive feedback on our paper. Your insights are valuable for improving our work. We address each of your points below: ## 1. Increased number of parameters and memory usage We acknowledge that the Visual Expert introduces additional parameters, increasing static GPU memory usage during training and inference. This represents a trade-off between performance and computational cost. Notably, CogVLM outperforms models like LLaVA-Next, which use much larger language models, demonstrating the efficiency of our approach. Similar parameter-heavy approaches have been employed in NLP models, such as DeepSeek-v2[1] (236B total parameters, 21B activated). We explored the LoRA expert method as suggested by Reviewer 7nCy, using a LoRA rank of 128. However, we found its expressiveness limited: - It required 3.7 times more steps to reach the same loss level as our current method. - The per-step computation time was nearly identical to our current approach. These results indicate that while LoRA expert can reduce parameter count, it may come at the cost of training efficiency. In our experiments with language models up to 32B parameters, the additional memory overhead remained acceptable. For future extensions to larger language models, we will further investigate parameter reduction techniques. ## 2. Retaining LLM abilities We appreciate this excellent question. Our approach improves the model structure, while InternLM-XComposer2 focuses on data-side enhancements. These methods are not mutually exclusive. In fact, our latest experiments show that the best multimodal fusion is achieved by using Visual Expert modules, unfreezing LLM parameters, and simultaneously training on multimodal and pure text data. We will include a discussion on this in the revised paper. ## 3. Shared position ID for visual tokens in RoPE Thank you for raising this important point. 
Our positional encoding scheme addresses the "remote attenuation effect" in RoPE, where attention weights decrease with increasing token distance. This prevents the query from overfocusing on nearby image tokens. Comparative experiments demonstrate the benefits: - At 224x224 resolution, our method and the original achieve the same pre-training loss. - At 490x490 resolution, our method achieves 5% lower loss. - On the DocVQA task (1120x1120 resolution), our method improves accuracy from 47.7% to 49.1%. The concurrent work Mousi[2] observed a similar phenomenon, further validating our approach. [1] DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model [2] MouSi: Poly-Visual-Expert Vision-Language Models --- Rebuttal 2: Comment: Thanks for your feedback. Most of my concerns are solved. I will keep my rating.
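The positional-encoding discussion above can be made concrete with a toy sketch: RoPE rotation angles are computed from explicit position IDs, so assigning all visual tokens one shared ID keeps later text queries from being attenuated by their distance to most image tokens. The function below is an illustrative reimplementation for the example, not the authors' code.

```python
import numpy as np

def rope_angles(pos_ids, dim, base=10000.0):
    """Per-position RoPE rotation angles, shape (len(pos_ids), dim // 2)."""
    inv_freq = 1.0 / base ** (np.arange(0, dim, 2) / dim)
    return np.outer(pos_ids, inv_freq)

n_image, n_text, dim = 4, 3, 8

# Standard scheme: every token gets a distinct position.
standard = np.arange(n_image + n_text)

# Shared scheme: all image tokens reuse position 0; text continues from 1,
# so every text token sits at the same rotary distance from every image token.
shared = np.concatenate([np.zeros(n_image, dtype=int),
                         1 + np.arange(n_text)])

a = rope_angles(shared, dim)
# All image tokens end up with identical rotation angles.
assert np.allclose(a[0], a[n_image - 1])
```

Under the standard scheme, RoPE's remote attenuation makes a query weight nearby image tokens more heavily; the shared scheme removes that within-image distance entirely, which matches the resolution-dependent gains reported above.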
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Understanding the Transferability of Representations via Task-Relatedness
Accept (poster)
Summary: This paper considers transfer learning in a general cross-domain, cross-task setting. It introduces the concept of "task-relatedness" into measuring whether transfer learning can be successful. Specifically, task-relatedness consists of (1) a reference loss term for measuring the difference in class prior distribution between the target and source tasks; (2) a label mismatch term for measuring the conditional entropy between the label distributions; and (3) a distribution mismatch term for measuring the difference in feature distributions. To make computing task-relatedness practical, the paper proposes several approximation schemes. It shows the metric is highly correlated with model transferability in practice and can be used for selecting pretrained models. Strengths: 1. Practical significance: The topic studied is important to the field of machine learning, as fine-tuning large-scale pretrained models gradually becomes the go-to option for model development. The idea of decomposing transferability into prior, feature, and label distribution differences is clean and interesting. 2. Complete pipeline: the paper not only proposes task-relatedness as a transferability metric but also shows how it can be transformed into a training objective for effective transfer learning. This makes the paper more comprehensive and the proposed method more valuable. 3. Presentation is clear and code for reproducing the experiments is provided. Weaknesses: 1. Limited novelty: the idea of using distribution distances to measure transferability is not new. As the related work section has pointed out, using the Wasserstein distance to estimate domain difference has been studied by OTDD [1]. Works like OTCE [2] also use such metrics for pretrained model selection. One argument the paper makes is that the proposed task-relatedness works without requiring target labels. 
While this does add some practical significance, the proposed method is a simple pseudo-labeling scheme without theoretical justification. 2. Limited evaluation: the baselines considered in the experiments are not comprehensive, e.g., OTCE is not evaluated, and other recent OT-based works like [3] are not discussed. There is also no comparison of different pseudo-labeling schemes for the missing-label setting. Besides, there is no ablation study showing the contribution of each of the three terms proposed for computing task-relatedness. [1] Geometric Dataset Distances via Optimal Transport. David Alvarez-Melis, Nicolò Fusi. [2] OTCE: A Transferability Metric for Cross-Domain Cross-Task Representations. Yang Tan, Yang Li, Shao-Lun Huang. [3] Wasserstein Task Embedding for Measuring Task Similarities. Xinran Liu, Yikun Bai, Yuzhe Lu, Andrea Soltoggio, Soheil Kolouri. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The class-prior transformation assumes the reference task has more classes than the target task. Is this too strong an assumption? Does the proposed method apply to settings where the target task has more labels? 2. In Figure 3, the rankings of transferability for different target tasks are the same across all models. This is perhaps because the same reference dataset is used, and the experiments do not show the effect of varying model backbones. It would be better if the paper could evaluate multiple reference datasets on multiple models. (Figure 4 does look at different reference datasets but the model is fixed here.) 3. Does the proposed metric apply to non-classification tasks like regression or auto-regressive prediction? 4. What's the max cross-domain distance for which the theory holds? I'd imagine that when the domain gap is large, the theory breaks. Have you studied that? Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We are delighted that you found our analysis clear and method practically valuable. We respond to your concerns below: > 1. Limited novelty: (A) the idea ... is not new. (B) the proposed method ... justification. A: While we agree with the reviewer that distribution distance is effective at gauging transferability, prior works such as OTDD and OTCE offer limited theoretical justification to explain a model's transferability. As mentioned by the reviewer in their summary of our paper, our analysis shows that there are two additional terms (namely, the reweighted reference loss and the conditional entropy of label sets) along with the distribution distance that provably explain transferability. Moreover, unlike OTDD and OTCE, which rely on the distance between the distributions of the reference and the target tasks in the input and representation space, respectively, our analysis shows that the distance between the **transformed** distribution of the reference task in the representation space (i.e., P_{R'''} obtained after using the transformation A) and the distribution of the target task is what provably explains transferability (Theorem 3). B: We agree with the reviewer that task-relatedness can easily estimate transferability without requiring target labels, and we emphasize that this ability is unique to our approach. This is because, to the best of our knowledge, no other transferability estimation method can produce pseudo-labels of the target data in the label space of the **target** task (methods such as LEEP [34] rely on pseudo-labels of the target data in the label space of the **source** task). Using these labels for the target task, we can compute task-relatedness as per Def. 2 and still estimate the transferability of the models (Table 2). This makes task-relatedness the only method that can gauge the transferability of models without access to target labels. 
Thus, compared to prior works, our paper advances both our theoretical and practical understanding of transfer learning in a meaningful way. > 2. Limited evaluation: (A) OTCE is ... not discussed. (B) no comparison ... pseudo-labeling schemes (C) no ablation study ... task-relatedness. A: Please see our joint response for comparison with OTCE. We thank the reviewer for pointing us to [a] and we briefly compare our approach to them here. We will include this discussion in the main paper. [a] proposes a model-agnostic approach that relies on optimal transport to compute the distance between the tasks similar to OTDD [3]. However, our work specifically focuses on understanding the transferability of different models both theoretically and practically and hence proposes a model-dependent metric for it. B: As mentioned above, no previous score-based transferability estimation methods can work without access to labels of the target data. Methods such as LEEP [34], use pseudo-labels of the target data from the source classifier but still require labels of the target data to compute their transferability score. Thus, to the best of our knowledge, task-relatedness is the only approach that can work when no target labels are available. Due to this, we show its effectiveness by comparing the correlation of task-relatedness with and without target labels to actual transferability in Table 2. Our results show that even without target labels, task-relatedness achieves a good correlation with transferability when reference and target tasks are related. C: In Appendix C.1, we study the effect of different transformations on task-relatedness. We present three settings, the first where no transformations are learned, the second where feature transformation is learned, and the third where all the transformations are learned. 
Our results show that task-relatedness achieves the smallest gap to transferability when all transformations are learned, closely followed by learning only the feature transformation, and significantly better than not learning any transformation. These three settings capture the effect of the three different transformations proposed in our transformation model. We also break down task-relatedness into its three corresponding terms in Figs. 3 or 4b to highlight the contribution of each term. > 1. Class-prior ... assumption? Please see our joint response. > 2. In Figure 3, ... fixed here.) The full results of Fig 3 presented in Fig 9 (in the Appendix) already include models trained with different model backbones such as ResNet-18, ResNet-50, ResNet-101, and ResNet-152 trained with a variety of training algorithms. Along with these results, Fig 4(a) shows the effect of training the same model backbone on different reference datasets and evaluates transferability to a variety of target tasks. Fig 4(b), evaluates transferability to various target tasks using the original pre-trained CLIP encoder and CLIP encoder fine-tuned end-to-end on different reference tasks. Thus, our experiments already evaluate the effect of different model backbones and reference datasets on transferability and task-relatedness. > 3. Application ... auto-regressive prediction? Please see our joint response. > 4. What's the ... holds? Similar to any other upper bounds that analyze the problem of distribution shift (e.g., [6]), a large upper bound implies that theory might not be able to explain the results in practice. This is true for task-relatedness as well since a large upper bound does not necessarily guarantee poor transferability. Using various datasets/tasks (in Fig 3 and 9) we show that task-relatedness estimates transferability closely. Moreover, under the conditions mentioned in Lines 221-225, our bound is indeed tight. 
**New References** [a]: Wasserstein Task Embedding for Measuring Task Similarities --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response. I'll raise my score to a 5. --- Reply to Comment 1.1.1: Title: Response by Authors Comment: We appreciate the reviewer's response to our rebuttal. We would be happy to address any other concerns the reviewer has.
Summary: This paper analyzes the transferability of the representations of pre-trained models to downstream tasks in terms of their relatedness to a given reference task. It aims to understand when the knowledge of these pre-trained models can be transferred to obtain high-performing models on downstream target tasks. Their analysis leads to an upper bound on transferability in terms of task-relatedness, quantified using the difference between the class priors, label sets, and features of the two tasks. The experimental results demonstrate the effectiveness of the proposed method. Strengths: (i) This work analyzes transferability for classification tasks and provides the first upper bound on transferability in terms of task-relatedness in a cross-domain cross-task setting. (ii) This work proposes an optimization problem to compute task-relatedness using a small number of target labels and shows that it can even predict performance after end-to-end fine-tuning without requiring target labels. (iii) This work conducts multiple experiments to demonstrate the effectiveness of the proposed method. Weaknesses: (i) This work mentioned that it is the first to analyze transferability for classification tasks under cross-domain cross-task settings. However, there are multiple works [1-5] that have focused on the transferability and generalization of learned representations. Meanwhile, transferability is mainly evaluated by utilizing the performance differences between different domains and tasks with distribution shifts.
This is also the case in previous works, but it is not reflected in the introduction and motivation of this paper.\ [1] Understanding few-shot learning: Measuring task relatedness and adaptation difficulty via attributes.\ [2] Otce: A transferability metric for cross-domain cross-task representations.\ [3] On the theory of transfer learning: The importance of task diversity.\ [4] An information-theoretic approach to transferability in task transfer learning.\ [5] Task relatedness-based generalization bounds for meta learning. (ii) (Minor) Although this work focuses more on theoretical exploration, empirical verification on real-world datasets is also necessary, such as ablation studies on the three proposed terms, trade-off experiments (performance vs. efficiency vs. memory footprint) due to the introduction of additional computations, and comparisons with baselines on transferability. (iii) (Minor) The formatting and layout of some formulas and tables need to be adjusted to make the reading clearer and more intuitive, and some equations contain writing errors, such as Eq. 3, L527-528, and the proof of Corollary 1. At the same time, due to the large number of formulas and physical quantities, it may be better to construct a table to explain the symbols and definitions. (iv) (Minor) The track chosen by the paper is not well matched to its content. Technical Quality: 2 Clarity: 3 Questions for Authors: Please see Weaknesses. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We respond to your concerns below: > Compare with [1,2,3,4,5] We thank the reviewer for pointing us to additional related works. We briefly compare our approach to them here and will include the discussion in the main paper. [1,3,5]: study the problem of few-shot learning (FSL), where a model is trained on data from related training tasks and is adapted to an unseen task using only a few samples. Different from these works, we focus on the transfer learning (TL) setting where a pre-trained encoder trained on some pre-training dataset is adapted with enough samples/gradient steps to a downstream task. This downstream target task may or may not have any relation to the pre-training task, unlike in [1,3,5]. Concretely, [1] proposed a model-agnostic metric called Task Attribute Distance to gauge the success of learning in the FSL setting. Our work, on the other hand, defines task-relatedness based on the similarity of the representations of the reference and target tasks under the pre-trained model (and is model-dependent) rather than relying on attribute information, which may not be available in the TL setting. [3] analyzes the sample complexity for learning a model shared across tasks and adapting it to a new target task and shows task diversity to be a crucial component for the success of FSL. Our work, on the other hand, does not assume access to shared tasks or restrict the number of samples required for fine-tuning on the target task. Moreover, their notion of task diversity requires access to a set of training tasks that may not be available in the TL setting, making our notion of task-relatedness more practical for TL. [5] proposes a notion of task-relatedness for the FSL setting, allowing it to utilize all the data from available training tasks to help learn a model on a new task with a few gradient steps.
This notion is model-agnostic and defined over the sample space ($X \times Y$), unlike our measure, which is defined in the representation space of the model whose transferability needs to be evaluated. Thus, while task-relatedness is at the core of both TL and FSL, the works [1,3,5] proposed notions relevant to the FSL setting whereas our work proposes a notion relevant to the TL setting. Please see the joint response for the empirical comparison against [2,4]. > Although this work ... baselines on transferability. **Ablation studies:** In Appendix C.1, we show the effect of using different transformations on the value of task-relatedness computed by solving Eq. 3. We present the results in three settings: the first where none of the transformations are learned, the second where only the feature transformation (parameterized by A) is learned, and the third where all the transformations are learned. The experimental results show that task-relatedness achieves the smallest gap to transferability when all transformations are learned, closely followed by learning only the feature transformation, and significantly better than not learning any transformation. These three settings capture the effect of the three different transformations proposed in our transformation model. **Trade-off experiments:** In Appendix C.1.2, we evaluate the number of epochs required by Alg 1 to minimize the proposed upper bound (compute task-relatedness) for four target tasks using the ResNet-18 model and ImageNet as the reference task. Our results in Fig 8 show that after approximately 600 epochs (approximately 2 minutes of wall-clock time on our hardware), Alg. 1 learns the transformations that lead to a good estimate of transferability. For the problem of end-to-end transferability estimation (Sec 4.3) for different models, we find that the computation of task-relatedness is significantly faster than actual end-to-end fine-tuning.
Specifically, computing task-relatedness requires only roughly 3-4 minutes, compared to end-to-end fine-tuning, which can take several hours. E.g., You et al., 2021 [54] show that end-to-end fine-tuning of a ResNet-50 model on the Aircraft dataset, with hyperparameter search, requires about a day’s worth of computation to achieve the best accuracy of 86.6% (see Sec 5.5 and Table 5 of You et al., 2021). In comparison, task-relatedness can be estimated very efficiently. Compared to other SbTE approaches such as SFDA, our approach increases the computational time by ~30 seconds but gets consistently better results, as shown in Table 1 and Fig 5 of the paper and the pdf in the joint response. **Comparisons with baselines on transferability:** In Figs 3 and 4(b), we compare the estimate of transferability computed via Alg. 1 to the ground-truth value of transferability (denoted by the blue bar labeled target loss). Our results show that task-relatedness incurs a small error compared to actual transferability. In Table 1, we present the correlation of task-relatedness with accuracy after end-to-end fine-tuning and compare it to the correlation of five popular score-based transferability estimation methods. Along with these, we also compare the effect of the number of available target labels on the correlation of each of these methods with the accuracy after end-to-end fine-tuning. Thus, we believe that we have already provided a detailed empirical evaluation of our methodology and compared our approach with popular transferability methods. However, if the reviewer would like to see any specific experiments added, we would be happy to provide them as well. > The formatting ... definitions. We thank the reviewer for this excellent suggestion. We will create a table to clarify all the symbols and their meaning.
We will expand and adjust the tables and equations to be more readable in the camera-ready version. --- Rebuttal Comment 1.1: Comment: Thank you for the responses. But my main concerns still exist: for example, the authors mentioned that transfer learning is a setting uniquely considered in this article, but meta-learning can be considered a branch of transfer learning; the concept of task similarity is difficult to distinguish from the one in this article; and descriptions like "can be used or not" and "may not" are vague. Therefore, the confusion in Weakness 1 is difficult to eliminate from the response. At the same time, by reading the responses to other reviewers, I found some issues that were previously overlooked. Therefore, after carefully checking all the responses, I tend to maintain my score. --- Rebuttal 2: Title: Response by Authors Comment: We thank the reviewer for their response to the rebuttal. While we agree that transfer learning is a broad research topic, our paper specifically works in the inductive transfer learning setting, where the focus is to leverage an inductive bias (a pre-trained model) to improve performance on a target task. Due to this, our analysis specifically proposes a model-dependent way of measuring the relatedness between a reference and a target task. The works [1,3,5] provided by the reviewer deal with the problem of task transfer learning. Their goal is to identify the relationship between tasks, regardless of the model, to explain transfer performance. E.g., the task attribute distance proposed by [1] is independent of the model being used and only depends on the tasks. (This terminology of inductive and task transfer learning is based on a previous work, LogMe [54], which has the same setting as ours.) Thus, the setting of our work differs significantly from the setting in [1,3,5]. We thank the reviewer for bringing this up.
We will add a discussion clarifying the type of transfer learning studied in the paper, along with mention of the relevant papers from the area of task transfer learning. We would be happy to address any further concerns the reviewer may have.
Summary: This paper analyzes transfer learning from the perspective of task relatedness. A model for transforming a reference task (the pretraining task) into the target task is proposed, which consists of a prior transform, a label transform, and a feature transform. Task relatedness is then measured as the distribution mismatch between the transformed distribution and the target distribution. Experiments have shown that the proposed task relatedness tightly upper-bounds transferability across a range of architectures and datasets. Strengths: 1. The motivation and the derivation of the proposed task relatedness make sense to me. 2. The empirical results have shown the advantage of the proposed task relatedness. Weaknesses: 1. The datasets for evaluation and empirical studies seem to be small in scale, for example, MNIST and CIFAR, which are small in image resolution, or Aircraft and DTD, which are small in the number of images. Technical Quality: 4 Clarity: 4 Questions for Authors: The appendix has answered most of my questions. My only question is: for Algorithm 1, does it have guaranteed convergence? And practically, how long does it take to reach convergence? Minor: A possibly related work: Discriminability-Transferability Trade-Off: An Information-Theoretic Perspective, ECCV 2022. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The paper has discussed its limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We are glad that you found our work intuitive. We respond to your concerns below: > The datasets for evaluation and empirical studies seem to be small in scale, for example, MNIST and CIFAR, which are small in image resolution, or Aircraft and DTD, which are small in the number of images. The datasets used in our work are the ones used by prior works. This was done to make the comparison with previously proposed methods for transferability estimation easier. Moreover, most model backbones used in our work require data to have dimensions 224x224, so we resized images from all datasets, including CIFAR, to match those dimensions. This shows that our method handles data with a larger resolution. > My only question is: for Algorithm 1, does it have guaranteed convergence? And practically, how long does it take to reach convergence? Similar to other problems in deep learning, this is a non-convex minimization problem where SGD converges to a local minimum. Fig 8 in Appendix C.1.2 shows how Alg. 1 minimizes the proposed upper bound by learning different transformations of the reference task (ImageNet) into four target tasks (CIFAR-10, CIFAR-100, Pets, and Aircraft) with the ResNet-18 model. After approximately 600 epochs (approximately 2 minutes of wall-clock time on our hardware), Alg. 1 converges to a local minimum. To demonstrate that Alg. 1 converges to a reasonable solution, we consider transfer learning using a 20-class subset of ImageNet as the reference task and CIFAR-10 as the target task. Results in Fig. 6 (bottom left) and Fig. 7 (in the Appendix) provide clear evidence of Alg. 1 finding the transformations that minimize the upper bound. Specifically, without learning any transformations the upper bound is large (~ 3.22 in Fig. 6) and the reference and target classes do not match (leftmost plot in Fig. 7). However, learning all the transformations reduces the upper bound (3.06 in Fig.
6), suppresses the priors of 10 of the 20 source classes so that the remaining 10 match the classes of CIFAR-10, and reduces the Wasserstein distance between the transformed source and target classes (rightmost plot in Fig. 7), illustrating the working and convergence of the optimization problem to a good solution. > Compare with Discriminability-Transferability Trade-Off: An Information-Theoretic Perspective We thank the reviewer for pointing us to this work. We briefly compare our approach to it here and will include the discussion in the main paper. That work demonstrated a tradeoff between a model's discriminability and transferability properties as training progresses and proposed a learning framework to alleviate this tradeoff. Our work, on the other hand, focuses on analyzing transferability in terms of the relatedness of the representations of the reference and target tasks after training is complete.
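The distribution alignment mentioned here (reducing the Wasserstein distance between transformed source and target representations) can be illustrated with a minimal sketch. For equal-sized 1-D empirical samples, the Wasserstein-1 distance reduces to the mean absolute difference of sorted samples; this is an illustrative simplification, not the paper's actual implementation:

```python
import numpy as np

def w1_per_dim(z_a, z_b):
    """Wasserstein-1 distance per feature dimension between two equal-sized
    empirical samples, using the sorted-difference form of the 1-D W1 distance.

    z_a, z_b : (n, d) arrays of representations for one class from each task.
    Returns a (d,) array of per-dimension W1 distances.
    """
    return np.abs(np.sort(z_a, axis=0) - np.sort(z_b, axis=0)).mean(axis=0)

# Toy data: the target representations are a shifted copy of the reference ones.
z_ref = np.array([[0.0], [1.0], [2.0]])
z_tgt = z_ref + 1.0
```

Shifting `z_tgt` back by 1 (a trivial "feature transformation") drives `w1_per_dim(z_ref, z_tgt - 1.0)` to zero, mirroring how learning the transformation A reduces the distribution mismatch term.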
Summary: The paper presents a way of assessing the transferability of representations from a pre-trained model to a target task by assessing the impact of the pre-trained model's representations on a reference task. By "transforming" the reference task into the target task, the paper produces a bound on the training loss of the target task. Strengths: 1. The paper presents both theoretical and empirical analysis of a problem 2. Some of the provided experiments are reasonably comprehensive 3. The idea of tackling the transferability problem by introducing a reference task seems novel AFAIK Weaknesses: My main issues with this paper are as follows. 1. **Utility of the method**: I was initially excited by the following sentence in the introduction of the paper -- *"Moreover, unlike previous SbTE metrics, task-relatedness can be estimated even without labeled target data, making it suitable for unsupervised transferability estimation, highlighting the advantage of a reference task as used in our analysis"*. However, it seems that Algorithm 1 requires access to the target labels $\mathcal{Y}_{T}$. This calls into question this stated advantage over the SbTE methods. Specifically, SbTE methods like TransRate and LEEP do not require training on a reference task, which not only comes with extra overhead but whose selection requires a-priori meta-intuition about which tasks are similar to the target task. Also, none of the empirical evaluations presented actually show that the proposed method is superior to SbTE approaches for choosing models from a zoo. Specifically, the paper presents hard-to-interpret correlation measures in Table 1 as proof of transferability -- a superior test would be to compare deltas in the final training performance of the models that each method selects from a zoo. 2. **Correctness / Impact of claims**: [a].
From line 238 - 241: *However, since finding data semantically related to the target task may not always be possible we choose a reference task with the same number of classes as the target and fix that matrix B to a random permutation of identity (making the label mismatch term zero) and D to the prior of the reference task, learning only the transformation A, in our experiments.* This seems to undercut the utility of the analysis presented. Why does this work? Why even discuss these terms if they are effectively fixed? [b]. It feels like the insight of *Highly related reference–target task pairs, based on task-relatedness, achieve higher transferability coinciding with the semantic relatedness between tasks.* is just punting the problem of transferability estimation to task similarity estimation (and now calling on practitioner intuition to decide which tasks are highly related). Thus it does not seem that the authors are actually effectively solving the problem of transferability estimation -- they are just transforming it into a different problem. [c]. The authors claim "tight bounds / small gaps between task-relatedness and transferability" but this is based on (imo) just eyeballing the graphs in Figures 3 and 9. We are not provided with a rigorous way to interpret the gap. E.g. -- a very simple way to artificially present a small gap is to just plot the losses at random init of the classification head for the reference and target tasks that have the same number of class labels. 3. **Strength of underlying technical assumptions**: [a]. The K_r > K_t assumption seems strong and limits the method's applicability. What if the most similar task to the target task has K_r < K_t? [b]. Also, as mentioned above in 238 - 241, it seems the label and class-prior matching components of the algorithm effectively have no influence on the final algo results. Score updated after discussion Technical Quality: 3 Clarity: 3 Questions for Authors: 1.
Work focuses on classification-based tasks -- could you discuss how this applies to generative tasks? 2. Can you discuss how faithful the estimates/method are when K_r < K_t? 3. The analysis is for training error -- how is transferability estimation affected if there is overfitting to the reference task? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A -- paper presents some limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We respond to your concerns below: > Utility: A) I was ... methods. B) Specifically, ... target task. A: While Alg. 1 requires labels of the target task, we discuss in Lines 329-347 how to use Alg. 1 for the case when target labels are unavailable. The main idea is to use the transformation model to obtain the pseudo-labels (Line 337) of the target data in the label space of the **target** task. (Note that this is different from the pseudo-labels of the target data in the label space of the **source** task as used by LEEP [34]). Using these pseudo-labels of the target data as a proxy for $Y_T$, we can now use Alg. 1. Since no other SbTE method (to the best of our knowledge) can work without target labels, we stated this in the introduction of the paper. We will update the input of Alg. 1 appropriately to clarify this. B: We emphasize that the reference task does not need to be similar to the target task to use our methodology for estimating transferability. Any benchmark/publicly available dataset can serve as the reference dataset and its use is not a bottleneck for our analysis/approach. Instead, it makes it possible to study transferability provably, enables us to estimate transferability without target labels, and provides insights into improving the transferability of a model to downstream tasks. > None of .. While the best way to evaluate transferability is to fine-tune a model end-to-end on the target task, it is computationally prohibitive. E.g., You et al., 2021 [54] show that end-to-end fine-tuning of a ResNet-50 model on the Aircraft dataset, with hyperparameter search, requires about a day’s worth of computation (Sec 5.5 and Table 5 of You et al., 2021). This is the motivation for most SbTE approaches that efficiently estimate transferability without end-to-end fine-tuning. 
Previous works [5, 34, 25, 54, 49] reported the Pearson correlation coefficient between the accuracy of models after end-to-end fine-tuning and the proposed scores. A high correlation of an SbTE score indicates that, by comparing the scores of various models, the one with the highest score is most likely the best-performing model on a given target task. Thus, we report correlation coefficients. (Since task-relatedness is based on loss, a higher negative correlation is desirable for us.) > [a]. From line 238 - 241.. We ablate the use of different transformations and measure the effect of learning each of them in Appendix C.1. Our experiments show that minimizing the conditional entropy by learning the transformation B prefers a sparse mapping between the classes of the reference and the target tasks. As a result, one class from the target is mapped uniquely to one class from the reference task, and the priors of extra reference classes are reduced to zero (this is illustrated in Fig. 7 of Appendix C.1.1). Based on this, we use a reference task with the same number of classes as the target task and fix the matrix B to be a random permutation of the identity matrix. Thus, while each term in the bound has its impact, by fixing the matrix B and choosing a reference task with the same number of classes as the target, we simplified the optimization problem in Eq. 1 and made it efficient for practical usage. > [b]. It feels.. The main purpose of the experiments in Sec. 4.2 is to demonstrate the validity of our analysis and the fact that when the target task is a transformation of the reference task, a model’s transferability to the target task can be provably explained based on that of the reference task. For example, for an encoder trained on MNIST, ground-truth transferability to USPS is better than that to FMNIST (Fig. 4(a) right). This is also what is suggested by our task-relatedness analysis/metric using MNIST as the reference task (Fig. 4(a) left).
Thus, as mentioned in Fig 1, given a pre-trained encoder and a reference task, we can provably explain the encoder’s transferability to those target tasks that are transformations of the reference task. For the practical purpose of estimating transferability, on the other hand, any reference task can be used (i.e., any publicly available or benchmark data) to compute task-relatedness. As shown in Sec 4.3, the use of a reference task to estimate transferability leads to a more stable estimate of transferability regardless of the number of target samples used, and also enables estimating transferability without access to target labels. Thus, while being theoretically sound, task-relatedness provides practical advantages over previous transferability estimation methods. > [c] The authors.. A small gap in transferability implies that the difference between the left-hand side of the bound in Theorem 3 and the right-hand side, computed by learning the transformations via Alg. 1, is small. While we agree that losses at random initialization for the reference and target tasks might show a small gap, it would not be very useful. Thus, we measure the left-hand side of the bound after training on the target task and then compare task-relatedness with this value. The target loss presented as the blue bars in Figs 3 and 9 shows this value. > The $K_r > K_t$ ... when $K_r < K_t$? Please see our joint response. > classification based ... generative tasks Please see our joint response. > The analysis.. This seems to be a misunderstanding, since our analysis is for the expected loss. The classifiers $h_R$ and $h_T$ are learned on the training data, and we report transferability and task-relatedness values on the test data from the two tasks, as in any standard ML pipeline. When there is overfitting on the reference task, we can expect the reweighted reference task loss (computed on test data) to be higher, which may lead to a larger gap between the actual and predicted transferability.
We emphasize that this is not unique to our method, as any analysis based on a source/reference task (e.g., [6,7]) will behave similarly. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for your detailed response to my review. Please see my responses below > `A: While Alg. 1 requires labels of the target task, we discuss in Lines 329-347 how to use Alg.` Hmm -- this is very confusing / seems contradictory. First, note that Algo 1 uses a random permutation matrix as B. Next, Lines 329-347 require you to use B to compute the pseudo-labels. But Lines 156-166 require using target labels to specify B. So are you saying that to compute pseudo-labels you use a random B? Please clarify if that is not the case, and if that is the case, what are the error bars on the results in Table 2? I would imagine some non-trivial sensitivity to the choice of B. > `B: .. We emphasize that the reference task does not need to be similar to the target task to use our methodology for estimating transferability. Any benchmark/publicly available dataset can serve as the reference dataset and its use is not a bottleneck for our analysis/approach.` This is also confusing to me -- and seems a bit "free-lunchy". Theorem 3 on line 213 clearly gives **an upper bound** that depends on the label and distribution. I would imagine that the choice of the reference task affects the looseness of this upper bound -- we can simply think of the reference task as a variable that we are minimizing the upper bound over to make it tighter. Thus, the quality of your transferability estimate does depend on the *appropriate* choice of reference task. Please clarify if I am mistaken. > `While the best way to evaluate transferability is to fine-tune a model end-to-end on the target task, it is computationally prohibitive.` I would like to respectfully push back on this.
You could have set up much smaller-scale experiments to validate that the transferability scores are actually good for a reasonably sized model selection problem. > `We ablate the use of different transformations and measure the effect of learning each of them in Appendix C.1. Our experiments show that minimizing the conditional entropy by learning transformation B prefers a sparse mapping between the classes of the reference and the target tasks. .... Based on this, we use a reference task with the same number of classes as the target task and fix the matrix B to be a random permutation of the identity matrix.` As you mention in Appendix C.1, this is done on a toy problem -- it's a big leap to assume that this is the right thing to do for a more realistic problem without a bit of preliminary / small-scale experimentation. Next, even if there is a sparse 1-1 mapping between classes, it is very strange to me that you decided that the exact mapping is not important and rather a random one would suffice. Basically, I understand your restriction to the set of `permutation matrices` but not your assumption that any random one of these matrices would suffice. --- Reply to Comment 1.1.1: Title: Invitation for further discussion Comment: Dear reviewer, We sincerely appreciate your time and effort in reviewing our manuscript. Through our rebuttal, we clarified your concern about making Alg. 1 suitable for transferability estimation when target labels are unavailable. We also justified the reason for reporting the correlation coefficient for the problem of transferability estimation, in line with all prior SbTE works [5, 34, 25, 54, 49]. Lastly, our latest response also provided clarification and more empirical evidence showing that fixing some of the transformations (such as fixing the matrix B to a random permutation) eases the combinatorial nature of the optimization problem in Eq. 3 without significantly affecting the value of the transferability estimate.
As the discussion period is about to end, it would be appreciated if you could let us know of any other concerns we can address at this point. If you believe our responses sufficiently addressed your concerns, we would humbly request you to re-evaluate our paper in the light of our discussion and update your rating of our work. Thank you immensely for your contributions and thoughtful consideration. Regards, Authors --- Rebuttal 2: Title: Response by Authors Comment: We thank the reviewer for their response to the rebuttal. > Are ... of B. The reviewer’s intuition is correct: the pseudo-labels are computed using the $B$ matrix provided at the initialization of Alg. 1 in the first iteration. But as Alg. 1 progresses, the matrix $B$ is also updated. As a result, the pseudo-labels of the target data are also updated. This can be thought of as an additional step in Alg 1 as follows: Step 2a: If target labels ($y_T^j$) are not available, then compute $y_T^j = \arg\max_{y \in \mathcal{Y}_T} [B h_R(A^{-1} x_T^j)]_y$ for all $j = 1, \dots, n_T$. For computing the results in Table 2, the matrix B is fixed to a permutation matrix (similar to our other experiments). Thus, there is no variability due to the matrix B. > I would ... mistaken. The reviewer's intuition is correct. The choice of the reference task affects the upper bound, as we demonstrated in detail in Sec. 4.2. Specifically, when the reference and target tasks are more related (in terms of our task-relatedness metric), the upper bound is smaller. However, finding the most related reference task for every target task may not be possible in practice. Since our analysis does not impose any restriction on the choice of the reference task used for computing task-relatedness, a practitioner can estimate the upper bound using a few benchmark/publicly available datasets and select the reference task as the one that minimizes the upper bound, similar to the approach suggested by the reviewer of treating the reference task as a variable.
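For concreteness, the pseudo-labeling step (Step 2a) described in this thread can be sketched as follows; the shapes and the form of `h_R` are illustrative assumptions, and the paper's actual implementation may differ:

```python
import numpy as np

def pseudo_labels(x_t, A, h_R, B):
    """Illustrative sketch of Step 2a: pseudo-labels for unlabeled target data.

    x_t : (n, d) target features in the pre-trained representation space
    A   : (d, d) invertible feature transformation (assumed square here)
    h_R : reference classifier mapping (n, d) features to (n, K_r) class scores
    B   : (K_t, K_r) label transformation (a permutation matrix in this setup)
    """
    z = x_t @ np.linalg.inv(A).T   # apply A^{-1} to each target feature vector
    scores = h_R(z) @ B.T          # (n, K_t) scores after the label transformation B
    return scores.argmax(axis=1)   # arg max over the target label set Y_T
```

With B fixed to a permutation of the identity, this simply relabels the reference classifier's predictions into the target label space; as A (and, in general, B) are updated across iterations of Alg. 1, the pseudo-labels are recomputed.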
> I would like ... problem. This seems to be a misunderstanding. SbTE methods, as per [54], produce a score for each model (ideally without end-to-end fine-tuning on the target) that correlates well with end-to-end accuracy. This allows selecting the top-performing model by simply evaluating these scores. Thus, all prior works report correlation coefficients, similar to our results in Table 1. Moreover, scores produced by SbTE methods do **not** numerically approximate the accuracy after end-to-end fine-tuning. For example, the results in Fig. 4 of [54] show that the values of SbTE metrics (y-axis of the plot) such as LEEP [34], NCE [50], and LogMe [54] lie in the ranges [-0.8, -0.3], [-0.45, -0.25], and [0.935, 0.95], respectively, for the Aircraft dataset, whereas the end-to-end fine-tuning accuracy (x-axis of the plot) lies in the range ~[72.5, 87.5] for the various models. It would be great if the reviewer could clarify their question if we have misunderstood them. > It is very ... suffice. The experiment in Appendix C.1 is a small-scale experiment, as it is in the same setting as in Sec 4.1 with the ResNet-18 model. It is intended to be an ablation study to understand how the different transformations affect the upper bound. The label mismatch term of Theorem 3 is minimized with a sparse B matrix corresponding to a one-to-one mapping between the classes of the reference and target tasks. However, due to the combinatorial nature of the problem, finding the exact permutation is infeasible. Regardless, the linear transformation $A$ can align the distributions of the representations of the target to any permutation of the reference classes quite accurately. This is primarily due to the high dimensionality of the representation space where the linear transformation is applied. As a result, the value of the bound differs only slightly for the different permutations of the reference task’s classes.
This is indeed a very interesting outcome that makes our bound usable without requiring an exact semantic matching between the classes of the reference and the target tasks. Below, we present an empirical evaluation to show that the difference between the bound computed with the true permutation and with a random permutation is small. We use MNIST as the reference task and USPS as the target task (and vice versa). We compare our results in a setting where only $A$ is learned and $B$ is set to an identity matrix, and when $B$ is set to a random permutation matrix. Note that the identity matrix corresponds to the correct mapping between the classes of the MNIST and USPS tasks (both contain digits from 0 to 9). We find that the upper bound obtained when $B$ is fixed to identity is only marginally better than the case when $B$ is a random permutation. Specifically, the difference between the bound when $B$ is fixed to a random permutation and when $B$ is an identity matrix is 0.10 for the MNIST→USPS task and 0.17 for the USPS→MNIST task. The primary reason for the decrease in the upper bound comes from the reduced distribution mismatch term. While the upper bound improves slightly when the ideal matching between the labels is known, such a mapping may not be known when the labels of the tasks are not related, e.g., for FMNIST and MNIST. Thus, fixing $B$ to a random permutation matrix yields a reliable estimate of transferability in most cases. We would be happy to address any further concerns the reviewer may have.
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable feedback and insightful questions. We are encouraged that all the reviewers found the main contribution of the paper of rigorously analyzing transferability to target tasks in a cross-domain cross-task transfer learning setting using task-relatedness novel (3SCA), intuitive (E8p5), and practically valuable (sD5Z); and our empirical evaluation comprehensive (3SCA, sD5Z). We are glad that reviewers E8p5 and dEjE appreciated the significance and contributions of our work and recommended acceptance. While 3SCA and sD5Z recommended reject and borderline reject, respectively, the weaknesses pointed out by them are primarily clarification questions about the paper’s setting and utility, which we have addressed comprehensively in this rebuttal. Please see our comments below for answers to each reviewer's specific questions. Along with these, we have included additional results comparing our method with OTCE and Hscore [a] for the end-to-end transferability estimation problem considered in Sec. 4.3. Based on these answers and clarifications, we hope we have addressed the concerns of each of the reviewers, and that reviewers will consider our answers, increase their ratings, and recommend acceptance. If additional questions arise during the discussion phase, we are just a post away and would be happy to address them. > **[dEJE, sD5Z]: Additional results for OTCE and Hscore** OTCE [49] and [a] propose score-based transferability metrics for the problem of end-to-end transferability estimation similar to those discussed in Sec. 4.3. Specifically, as discussed in Sec. 2, OTCE proposes a linear combination of the negative conditional entropy between the label sets of the source and target tasks, computed using an OT-based coupling matrix, and the Wasserstein distance between the representations of the two tasks.
[a], on the other hand, solves the HGR maximum correlation problem to analyze transfer learning in the setting where the source and target domains have the same features, similar to [50]. They propose to use the normalized Hscore as a way to estimate transferability. We have expanded our empirical section to include these baselines in Sec. 4.3. We emphasize that both these methods are specifically proposed for score-based transferability estimation and do not provide a rigorous analysis of transferability, unlike our approach. We originally omitted comparisons to these methods because OTCE requires auxiliary tasks to obtain hyperparameters crucial to its performance (as mentioned by its authors), and because some considered baselines, such as LEEP and LogMe, had already shown improvements over Hscore. Nonetheless, we have now expanded our empirical results to include these methods. For OTCE, we follow the official code and compute the recommended OT-based NCE score and OTCE score ($\lambda_1=-0.0001$ and $\lambda_2=-1$) using 4000 randomly selected training samples from the two tasks. For the source task, we subsample data from the same number of classes as the target task. For the H-score, we use the official code to compute the metric. The results for these two new metrics are presented in the attached PDF, corresponding to Table 1 of the paper, which evaluates the Pearson correlation of the metrics to the end-to-end fine-tuning accuracy of five models, and Fig. 5 of the paper, which evaluates the sensitivity of the metrics to different numbers of target samples. Consistent with the results in Table 1 and Fig. 5 of the paper, task-relatedness achieves a higher correlation with end-to-end fine-tuning accuracy than most previous SbTE methods. This correlation also remains stable regardless of the number of target samples, unlike most other SbTE methods, whose correlation degrades severely in scenarios with fewer samples from the target task.
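The evaluation protocol used throughout this comparison (correlating a transferability score with end-to-end fine-tuning accuracy across pre-trained models) can be sketched as follows; the scores and accuracies below are invented numbers for illustration only:

```python
import numpy as np

# Hypothetical transferability scores for five pre-trained models, and the
# accuracy each achieves after end-to-end fine-tuning on the target task.
scores = np.array([0.21, 0.35, 0.30, 0.52, 0.48])
accuracies = np.array([72.5, 80.1, 78.3, 87.5, 85.0])

# A good SbTE metric ranks models the way fine-tuning does, even though its
# raw values need not numerically approximate the accuracy.
r = np.corrcoef(scores, accuracies)[0, 1]
```

A high Pearson `r` means the metric can be used to pick the best model without fine-tuning every candidate, which is exactly what Table 1 measures.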
> **[3SCA, sD5Z]: Does the proposed method apply to the settings where the target task has more classes than the reference task?** Yes, since this assumption does not affect the analysis. We assumed it since we focus on explaining transferability with the performance/classifier of the reference task. Thus, scenarios where the reference task has fewer classes than the target would intuitively lead to a poor understanding of transferability. While task-relatedness can be computed in this setting, the presence of fewer classes in the reference task can increase the conditional entropy and the distribution mismatch terms. This is because the mass of the extra target classes will be split across multiple reference task classes, leading to a higher distribution mismatch. The conditional entropy will be higher because a sparse mapping (such as one-to-one) between reference and target task classes cannot be obtained. Practically, since a practitioner can choose any reference task to evaluate task-relatedness, it is preferable to use one with more classes to get a better estimate of transferability. > **[3SCA, sD5Z]: Does the proposed metric apply to non-classification tasks?** The popularity of transfer learning in the classification setting is the primary reason we analyze this setting. The transformation model used in our work includes components such as label or prior changes, which are most suitable for classification tasks. While the distribution mismatch component could be useful beyond classification tasks, it is unclear what form of distribution divergence is most suitable. Developing an appropriate transformation model for auto-regressive tasks and the performance metrics associated with them requires further research. Thus, we believe that extending the analysis to other non-classification tasks is an excellent suggestion by the reviewers; however, it is non-trivial and currently out of the scope of this paper.
**New References** [a]: An information-theoretic approach to transferability in task transfer learning. Pdf: /pdf/371ed0481cf750ee1da52fb223c1eb98e7c09fcc.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Universal Rates of Empirical Risk Minimization
Accept (poster)
Summary: In this paper, the authors build on earlier work in learning theory that provides an alternative to the classical PAC theory in the setting of realizable binary classification. That earlier work considered *universal* learning rates, where a rate is universal if the data source is fixed across sample sizes (as opposed to being allowed to depend on the sample size $n$) and demonstrated a trichotomy in the possible rates of learning. The present work considers a similar setting, where the data distribution is fixed and $n$ increases, but analyzes only the natural algorithm ERM. As opposed to the earlier setting where arbitrary algorithms are allowed, the universal learning rates of ERM satisfy a tetrachotomy, depending on several technical combinatorial notions of complexity of the hypothesis class. The paper complements their theoretical results with a number of concrete examples of hypothesis classes that fall into different cases of the tetrachotomy as well as comparing these cases to the universal rates attainable with more general algorithms than ERM. Strengths: The paper presents interesting and valuable contributions to statistical learning theory, following in the footsteps of an earlier work that seeks to understand learning beyond the classical PAC setting. The algorithm considered, ERM, is classical and the standard approach to virtually all supervised learning problems and thus is an important setting to understand, hewing closer to practice than the more involved algorithms of the earlier work to study universal rates. Furthermore, the presentation is excellent, with a number of helpful concrete examples included for grounding and clear exposition on the technical combinatorial details. Weaknesses: The primary weakness of the work is the restrictive assumption of realizability, as well as the focus purely on binary classification.
In the setting studied, however, the paper does a good job of essentially completely answering the question of the universal rates. There is a minor gap in the combinatorial characterization of when rates $1/n$ vs $\log(n)/n$ are expected, but this limitation is adequately discussed in Remark 3. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Are the algorithms that Bousquet et al 2021 use to demonstrate universal learning rates in the absence of computational considerations oracle-efficient with respect to an ERM oracle? If not, it might be good to emphasize this point. If so, what are the computational advantages of considering ERM? 2. A minor typo: in example 6, the VC dimension of halfspaces is $d+1$ not $d$. 3. In the conclusion, you suggest the study of "best-case ERM," but I am confused as to what that means? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging comments and insightful suggestions on our paper. Below, we provide detailed responses to your concerns and questions: 1. For the assumption of realizability, as is traditional in the learning theory literature, we first focus on the realizable case to build insights due to its simplicity, and the next natural step is to extend to the non-realizable (agnostic) setting. 2. The algorithm of [Bousquet et al., 2021] is not expressed as a reduction-to-ERM, and it is really unclear whether such a reduction is possible. Their algorithm is expressed in terms of an extension of Littlestone's SOA algorithm allowing for infinite ordinal values of the Littlestone dimension. Moreover, for finite Littlestone dimension, the recent work of [Assos, Attias, Dagan, Daskalakis, \& Fishelson, 2023] have found that online learning of Littlestone classes is achievable via a reduction to ERM, but this reduction relies on the relation between the Littlestone dimension and threshold dimension, a relation which breaks down for infinite ordinal Littlestone dimensions, since thresholds on the natural numbers do not admit an infinite Littlestone tree. 3. Thank you for pointing out this typo. The VC dimension of halfspaces is d+1. 4. To get an intuition of the so-called "best-case ERM", consider the following alternative to our Definition 2: for the upper bound, "for every ERM algorithm" $\rightarrow$ "there exists an ERM algorithm"; and for the lower bound, "there exists an ERM algorithm" $\rightarrow$ "for every ERM algorithm". In particular for the realizable case, studying the "best-case ERM" is equivalent to studying the learning rates of general proper learners. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: Thank you for the clarifying points, especially on the reduction to ERM of prior work. I would encourage you to include this discussion in the paper. I maintain my (quite high) score. Great work!
Summary: This paper studies the performance of ERM on realizable binary classification problems in the "Universal learning" framework of [1]. Specifically, the original work [1] showed that the optimal universal rate of convergence was in general not achieved by ERM procedures. Nevertheless, characterizing the universal rate of convergence of ERM procedures as a function of the hypothesis class in this setting is still interesting as these procedures are widely used in practice. This paper tackles this question and shows that if a hypothesis class $\mathcal{H}$ is universally learnable by ERM procedures, then the universal rate of convergence of the worst ERM procedure is either $\exp(-n)$, $1/n$, $\log(n)/n$ or arbitrarily slow. Necessary and sufficient conditions in terms of properties of $\mathcal{H}$ are given that indicate which of these cases occurs. A second result provides a more refined such characterization as a function of the target function $h_{*} \in \mathcal{H}$ as well. [1]: Bousquet, O., Hanneke, S., Moran, S., Van Handel, R., and Yehudayoff, A. (2021), “A theory of universal learning,” in Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 532–541. Strengths: + The problem studied as well as the results are interesting. Weaknesses: + The writing of the paper is at times uncomfortably close to that of [1]. Technical Quality: 3 Clarity: 3 Questions for Authors: minor comments: + lines 69-70: "unlike that" -> "while". + line 72: "to the characterization of" -> "characterizing". + line 82: "necessary" -> "necessarily". + line 107: "exist" -> "exists". + line 108: "it requires us" -> "we need" Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: A main limitation of this work is that it does not characterize when a class is universally learnable by ERM. This is mentioned in lines 147-150. More discussion about why this is difficult to achieve would enhance the paper.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments on our paper. We are happy to know that you found the problem and our results interesting. 1. For the weakness, we adopted the same universal learning framework of the work of [Bousquet et al., 2021] as well as some related definitions. However, our theory differs a lot from theirs, including but not limited to a different problem setting (we focus on ERM learners), different possible universal rates, newly developed combinatorial structures, and plenty of concrete examples for nuanced analysis. 2. For the questions, we appreciate your careful reading. These are very fair comments on grammar, allowing us to enhance the quality of our manuscript. We will go through the paper to correct other potential errors. 3. For the limitation you mentioned, we would like to extend our sincere thanks to you for pointing out areas where our paper could be improved. In our manuscript, we left universal learnability by ERM as an open question for future work, as it is not within the scope of this work. One might guess that the universal Glivenko-Cantelli property might be the correct characterization, but it turns out that it is not (see Example 5). We suspect such a characterization would be of a significantly different nature compared to the combinatorial structures we find for characterizing the rates. For instance, note that the set $\mathcal{H}$ of all concepts with finitely-many 1's or finitely-many 0's is learnable by ERM (albeit at arbitrarily slow rates) if $\mathcal{X}$ is countable but not if $\mathcal{X}$ is the real line. These cases won't be distinguished by a discrete combinatorial structure. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I would like to emphasize that my evaluation is an educated guess; I do not know enough about the work [1] to fairly judge the current work.
Summary: The main goal of the proposed work is to understand the learning procedure with a focus on Empirical risk minimization. The authors claim to provide a complete picture of the four possibilities of different learning rates by ERM. The work also introduces many new concepts related to combinatorial dimensions. Strengths: The paper is extremely well-written. I appreciate the effort put in by the authors to provide such a clear exposition. One of this paper's key strengths is exploring and finally characterizing all possible universal learning rates by ERM for the first time. I particularly like the detailed picture featuring the dichotomies between several learning rates with particular examples. Weaknesses: The paper is overall an enjoyable read, but I am not entirely sure how well it fits NeurIPS since the work is mostly theoretical and centred around learning theory. I believe the paper would be more suitable for a venue entirely dedicated to learning theory, such as COLT. Technical Quality: 4 Clarity: 4 Questions for Authors: Can the authors explain what they mean by "learning scenarios such as the agnostic case" in l421? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and positive remarks on our paper. For the weakness you mentioned, we would like to point out that, in addition to numerous practical works, the interdisciplinary NeurIPS conference also has a rich history of many seminal purely-theoretical works in learning theory. Here is a response to your question. Mathematically, the agnostic case stands for the situation $\inf_{h\in\mathcal{H}}\text{er}_{P}(h)>0$, which is more practical than the realizable case since we usually have no knowledge about the ground-truth model in real-world applications. --- Rebuttal Comment 1.1: Comment: I thank the authors for their explanation.
Summary: The work contributes to a recent line of work on universal learning rates, i.e. rates with distribution-dependent constants. It characterizes the universal learning rates for the ERM learning rule (specifically, the "worst-case" version thereof), showing a partition into four possible optimal universal ERM rates, with the rate followed by the "worst-case" ERM determined by combinatorial measures on the hypothesis class. Strengths: Given the ubiquity of ERM, and the looseness of the more classical minimax analyses, the work makes a clear contribution towards the understanding of learning in practice. The work is comprehensive, points out interesting future directions and potential improvements to their theory (e.g. the potential for a simple combinatorial measure separating the $1/n$ and $\log(n)/n$ regimes). Weaknesses: While the examples of different rates in Section 1.1 are expanded on mathematically in the Appendix, they don't add much as written in the body and could use a bit more intuitive framing. The presentation of Section 3 feels a bit pedantic at times. Technical Quality: 3 Clarity: 3 Questions for Authors: What does a "bad ERM algorithm" (line 135) look like? Random selection from the version space? Do you have any conjectures as to if and where the rates for "best-case" ERM differ from the rates presented here? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and helpful suggestions on our paper. Below, we provide detailed responses to your comments and questions: 1. For the weakness, we will definitely try to improve the writing and add intuitive explanations in the final version. 2. To answer the first question: the notion of a "worst-case ERM" arises because of the non-uniqueness of the minimizer of the 0-1 loss in classification, i.e., there may be many different ways of breaking this tie, and our analysis aims to provide a rate that holds for all such choices. The notion of "random selection" requires first that there is some well-defined measure on the concept class, and such an idea may require a different analysis. In particular, for the Example 4 that you pointed out, we actually give the explicit form of a bad (the worst) ERM algorithm in Appendix B.1 (see Example 11, lines 584-585), which is a deterministic algorithm. 3. For the second question, in particular for the realizable case, studying the "best-case ERM" is equivalent to studying the optimal rates of general "proper learners", which is a fascinating and important question for future work. We suspect the categorization of rates will somewhat differ from those we establish for the worst-case ERM, analogously to the uniform analysis in [Bousquet, Hanneke, Moran, \& Zhivotovskiy, 2020]. --- Rebuttal Comment 1.1: Comment: Thanks for the reply and the pointer to Appendix B.1.
null
NeurIPS_2024_submissions_huggingface
2,024
Summary: This paper studies the universal rates for ERM learners in the realizable case. While a complete characterization of universal rates for PAC has been studied, it was previously not clear what the universal rates are for the popular ERM learners. This paper presents a tetrachotomy for the ERM learners. In doing so, the authors also develop some new combinatorial complexity measures. Strengths: This paper presents a tetrachotomy for universal rates of the ERM learners: there are only four possible learning rates, i.e., e^{-n}, 1/n, \log(n)/n, or arbitrarily slow. The authors also provide results for target-specified universal rates and introduce some new complexity measures. Weaknesses: It seems that a lot of the techniques used in this paper were borrowed from "A Theory of Universal Learning" by Bousquet et al. Can the authors highlight what the main difficulties are of proving universal rates for ERM learners (compared to PAC learners), and what new techniques were developed for that? Also, it's not clear to me what the "worst-case" ERM algorithm (line 101) is. Can the authors elaborate on that? Technical Quality: 3 Clarity: 3 Questions for Authors: See above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback on our paper. Below, we provide detailed responses to your questions: 1. ERM learners are quite different from the designed optimal learners constructed in [Bousquet et al., 2021]. Our analysis reveals a completely different characterization of when ERM learners achieve each of the possible rates, and indeed also reveals that a $\log(n)/n$ rate is possible for ERM learners, which does not appear as an optimal universal rate. The main difficulty in establishing our theory comes from identifying the correct characterizations, which are different from those that characterize the optimal universal rates. As for the proofs, most of them are actually built on the classic PAC theory of the ERM principle, together with arguments connecting combinatorial structures from PAC theory to appropriate infinite structures, such as eluder sequences, star-eluder sequences, and VC-eluder sequences, instead of the techniques in [Bousquet et al., 2021]. Moreover, universal learning by ERM has many nuanced cases (the constructed examples in the paper) that we have to consider. 2. The notion of a "worst-case ERM" arises because of the non-uniqueness of the minimizer of the 0-1 loss in classification, i.e., there may be many different ways of breaking this tie, and our analysis aims to provide a rate that holds for all such choices. The so-called "worst-case ERM" is actually reflected in our Definition 2. Note that for the upper bound, if the worst-case ERM can achieve some learning rate, then every ERM algorithm can achieve that rate as well. For the lower bound, if there exists an ERM algorithm that fails to achieve some learning rate, then it must be that this is also true for the worst-case ERM. Another way to define the worst-case ERM is presented in Remark 2, i.e., $\sup_{h\in V_{n}(\mathcal{H})}\text{er}_{P}(h)$, which reflects the error rate of the worst-case ERM.
We would like to emphasize that the classic PAC theory of ERM also considers the worst-case ERM, one of the equivalences in the Fundamental Theorem of Statistical Learning is stated as "Any ERM rule is a successful PAC learner for $\mathcal{H}$", which is essentially the same as saying "The worst-case ERM rule is a successful PAC learner for $\mathcal{H}$". --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I have increased my score to 7.
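To make the worst-case ERM notion concrete, here is a toy sketch (our own illustration, not from the paper): a finite hypothesis class, a sample that leaves the version space $V_n(\mathcal{H})$ non-trivial, and the supremum of the true error over that version space.

```python
import itertools

# Toy setup: domain X = {0, 1, 2}, H = all binary concepts on X,
# uniform distribution P on X, and a realizable target h*.
X = (0, 1, 2)
H = list(itertools.product((0, 1), repeat=len(X)))  # each h is a tuple of labels
h_star = (0, 1, 0)

def er_P(h):
    """True error of h under the uniform distribution on X."""
    return sum(h[x] != h_star[x] for x in X) / len(X)

# A sample that pins down the labels at x=0 and x=1 but says nothing about x=2.
S = [(0, h_star[0]), (1, h_star[1])]

# Version space: every hypothesis consistent with S (in the realizable case,
# each of these is an empirical risk minimizer for this sample).
V = [h for h in H if all(h[x] == y for x, y in S)]

# Worst-case ERM error: the supremum of the true error over the version space.
worst = max(er_P(h) for h in V)
```

Here two hypotheses survive in the version space; one has zero error and the other errs on the unseen point, so the worst-case ERM error is 1/3 while the best-case ERM error is 0, illustrating why the two notions can have different rates.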
null
null
null
null
null
null
Diffusion Imitation from Observation
Accept (poster)
Summary: This paper introduces Diffusion Imitation from Observation, an adversarial imitation learning method using a conditional diffusion model as the discriminator for policy learning. DIFO learns a diffusion model on expert and agent state transitions, with an auxiliary binary classification objective to discriminate between the expert and the agent. Experiments show that DIFO can learn an imitation policy online, given only expert demonstration observations, outperforming relevant LfO baselines. Ablations and variants show that both loss terms are important for the diffusion model training, and justify the choice of diffusion model conditioning input. Strengths: - Using conditional diffusion models for adversarial imitation learning from observations is a novel approach. - The loss objectives in the diffusion model are well-motivated and shown to be necessary for downstream policy performance. - The trained diffusion model captures the expert distribution, and exhibits some generalizability in the generated trajectories. - Experiment results show that DIFO works across multiple state-based and image-based environments, outperforming previous methods for learning from observation. Weaknesses: - This method requires online interactions to train the diffusion discriminator and the policy. One popular alternative approach in this setting (online + expert demonstrations) is optimal transport-based RL, which also works with only access to expert demo observations (optimal transport over expert and agent state trajectories as the reward). [\[1\]](https://arxiv.org/pdf/2008.09167) [\[2\]](https://arxiv.org/pdf/2206.15469) It would be convincing to see comparisons with this line of work. - It would be great to see more evaluations on image-based environments. Technical Quality: 4 Clarity: 4 Questions for Authors: - It seems unclear to me why behavioral cloning, with access to ground truth action labels, is doing worse than LfO methods in Figure 3. 
- Does DIFO work with negative samples generated offline, e.g. adding Gaussian noise to the previous state, instead of policy online rollouts? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: - Most experiments are on state-based environments. It would be convincing to see more evaluations on image-based environments. - This method requires online rollouts for collecting negative transitions for discriminator training. It would be great if this method can be shown to work completely offline. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. > This method requires online interactions to train the diffusion discriminator and the policy. One popular alternative approach in this setting (online + expert demonstrations) is optimal transport-based RL, which also works with only access to expert demo observations (optimal transport over expert and agent state trajectories as the reward). [1] [2] It would be convincing to see comparisons with this line of work. We thank the reviewer for providing these references. We will revise the paper to discuss these works. As requested by the reviewer, we additionally implemented Optimal Transport (OT) [1]. OT uses proxy rewards derived from Sinkhorn distances rather than directly obtaining rewards from the raw logits of the discriminator, which enhances stability compared to AIL methods. We report the result in Figure R.1 in the PDF file attached to the rebuttal summary, which shows that our method consistently outperforms OT across all evaluated tasks. We hypothesize that this is because OT computes distances at the trajectory level rather than the transition level, which requires monotonic trajectories. Consequently, OT performs well in environments like Walker and AdroitDoor, where trajectory variety is limited. However, it struggles in tasks with diverse trajectories, such as navigation, where the initial and goal states vary significantly. In contrast, our method generates rewards at the transition level, allowing us to identify transition similarities even when facing substantial trajectory variability. This flexibility enables our method to succeed in more complex environments with diverse trajectories. We will revise the paper to include the results and the discussion. > It would be great to see more evaluations on image-based environments. 
As requested by the reviewer, we additionally conducted experiments on the drawer close task introduced in Meta-World [2] using image-based states. This table-top manipulation task requires the agent to control a Sawyer robotic arm to close a drawer. An illustration of this task is shown in Figure R.5(a) in the PDF file attached to the rebuttal summary. Figure R.5(b) presents the learning efficiency of our proposed method and the baselines. Our proposed method achieves an 80% success rate with only 70k environment interactions, outperforming BC, BCO, GAIfO, WAILfO, AIRLfO, IQ-Learn, and DePO. We will revise the paper to include the results of this new image-based environment. > It seems unclear to me why behavioral cloning, with access to ground truth action labels, is doing worse than LfO methods in Figure 3. While BC has access to ground truth actions, it may suffer from compounding errors, i.e., the accumulation of errors from small initial deviations, caused by covariate shift [3, 4, 5]. Because BC relies solely on learning from the observed expert dataset, unlike the LfO methods that utilize online interaction with environments, it is susceptible to accumulating errors from states with small deviations, ultimately reaching an irrecoverable point. Therefore, BC is well-known for requiring a substantial amount of expert data to achieve coverage of the dataset and reduce unseen states. However, under our experimental setting with limited expert data, BC does not have sufficient data to generalize to unseen states, leading to poor performance. > Does DIFO work with negative samples generated offline, e.g. adding Gaussian noise to the previous state, instead of policy online rollouts? We thank the reviewer for the insightful idea. This idea of generating and utilizing negative samples offline for imitation learning resembles the idea of Implicit Behavior Cloning (Implicit BC) [6].
Implicit BC learns the joint probability of expert state-action pairs $p(s, a)$ via contrastive learning using expert data and offline generated negative data without policy online rollouts. We believe it is possible to generate negative data offline by adding Gaussian noise to the previous state, as suggested by the reviewer, and learning our diffusion classifier to classify expert state transitions and the generated state transitions. However, unlike Implicit BC, our work focuses on learning from observation, where we do not have access to action labels in expert demonstrations. Hence, without online interactions, it would be impossible to know the action space, let alone learn a policy. Thus, we believe that online interactions are essential for policy learning. **References** [1] Papagiannis et al. "Imitation learning with Sinkhorn Distances." In ECML PKDD, 2022. [2] Yu et al. "Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning." In CoRL, 2019. [3] Ross et al. “Efficient Reductions for Imitation Learning.” In AISTATS, 2010. [4] Ross et al. “A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning.” In AISTATS, 2011. [5] Laskey et al. “SHIV: Reducing supervisor burden in DAgger using support vectors for efficient learning from demonstrations in high dimensional state spaces.” In ICRA, 2016. [6] Florence et al. “Implicit Behavioral Cloning.” In CoRL, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for the additional experiments and comments addressing the concerns. Overall I am positive about the paper and I will maintain my score of Accept. --- Rebuttal 2: Comment: We sincerely thank the reviewer for acknowledging our rebuttal and helping us improve our submission. Title: Re: Official Comment by Reviewer P9FE
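As background for the discriminator-derived reward discussed in this thread, a common generic way AIL methods turn a discriminator output into a per-transition "realness" reward (the $\log D - \log(1-D)$ form) can be sketched as follows; this is an illustrative sketch, not necessarily DIFO's exact reward:

```python
import math

def ail_reward(logit: float, eps: float = 1e-8) -> float:
    """Generic AIL 'realness' reward from a discriminator logit on (s, s').

    d approximates the probability that the transition came from the expert;
    the agent is rewarded for transitions the discriminator deems expert-like.
    """
    d = 1.0 / (1.0 + math.exp(-logit))       # discriminator probability D(s, s')
    return math.log(d + eps) - math.log(1.0 - d + eps)

# Expert-like transitions get positive reward, agent-like ones negative.
r_pos, r_neg = ail_reward(2.0), ail_reward(-2.0)
```

For `eps = 0` this reduces to the raw logit itself, which is why reward scale and training stability are tightly coupled to discriminator training in AIL methods, the issue the OT-based proxy rewards mentioned above try to sidestep.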
Summary: The paper introduces a novel method named Diffusion Imitation from Observation (DIFO), which integrates diffusion models into the adversarial imitation learning from observation (LfO) framework. Traditional adversarial imitation learning methods often struggle with hyperparameter sensitivity and training stability. DIFO leverages the generative capabilities of diffusion models to improve the imitation process. Specifically, the diffusion model generates the next state based on the current state, and its learning objective is reformulated as a binary classification task to distinguish between expert and agent transitions. This model then provides "realness" rewards to guide the policy learning process. The paper demonstrates that DIFO achieves superior performance across various continuous control tasks compared to existing LfO methods. Strengths: 1. The paper introduces a novel approach by integrating diffusion models into the adversarial imitation learning from observation (LfO) framework, enhancing both stability and performance. 2. The proposed DIFO method consistently outperforms existing LfO methods across various continuous control tasks, demonstrating improved data efficiency. Weaknesses: 1. Although the authors name their paper "Diffusion Imitation from Observation," the performance gain does not seem to be due to the use of the diffusion model. As shown in Section 5.7, using the diffusion loss alone demonstrates very poor results. The major contributing factor is the BCE loss and the discriminator as a whole. It appears more like using diffusion loss as regularization for discriminator training. Do other regularization techniques lead to similar improvements, and why are diffusion models more suitable in this setting (an intuitive explanation)? 2. The LfO baselines in the experiments are rather outdated. More recent LfO baselines should be compared and discussed [1, 2]. 3. Minor problems: 1. 
In Line 111, there seems to be a missing symbol following the first tilde, and the $\phi$ should be the "parameters" of the diffusion model. 2. In the Preliminary section, the symbol $t$ is used to denote the environment step, while later $t$ is again used as the diffusion step, which might cause confusion. It is common practice in diffusion-based RL papers to denote the diffusion step and environment step using different symbols, respectively written as superscripts and subscripts. 3. The "Expert" lines are missing in Figure 3 and Figure 4. [1] Liu, M., Zhu, Z., Zhuang, Y., Zhang, W., Hao, J., Yu, Y., & Wang, J. (2022, June). Plan Your Target and Learn Your Skills: Transferable State-Only Imitation Learning via Decoupled Policy Optimization. In *International Conference on Machine Learning* (pp. 14173-14196). PMLR. [2] Liu, Y., Dong, W., Hu, Y., Wen, C., Yin, Z. H., Zhang, C., & Gao, Y. Imitation Learning from Observation with Automatic Discount Scheduling. In *The Twelfth International Conference on Learning Representations*. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The diffusion discriminator is different from prior methods in that we can sample from the distribution $D(s, s')$. Can the resulting discriminator itself be used as a policy when combined with an inverse dynamics model $p(a|s, s')$, and how does it perform during evaluation? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Authors listed the limitation of their method in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. > As shown in Section 5.7, using the diffusion loss alone demonstrates very poor results. The major contributing factor is the BCE loss and the discriminator as a whole. Diffusion models have been shown to serve as classifiers through their diffusion loss [1]. DIFO-NA demonstrates that the diffusion loss from an offline pre-trained model provides valid rewards, as shown in Figure 3(a), indicating that the diffusion loss can be a reasonable metric for the discriminator. However, as the agent continues to learn, it will generate transitions that increasingly resemble those of the expert, eventually causing the offline model to lose its ability to provide precise rewards. Hence, we integrate the AIL framework to improve the discriminator simultaneously with the policy. > It appears more like using diffusion loss as regularization for discriminator training. Do other regularization techniques lead to similar improvements? WAILfO builds on GAIfO's work by incorporating a gradient penalty as regularization to improve performance. Additionally, our implementations of GAIfO, AIRLfO, and WAILfO already employ L2 regularization. The experimental results show that our proposed method outperforms these methods with various regularizations. We would like to highlight that these regularizations could also be applied to our method. > Why are diffusion models more suitable in this setting (an intuitive explanation)? We hypothesize that there are two reasons why our diffusion model classifier performs better than the GAIL discriminator. First, the instability of GAIL arises from the tendency of discriminators to overfit, resulting in the policy's inability to learn from discriminator rewards. 
Unlike GAIL's MLP binary classifier, which maps a high-dimensional input to a one-dimensional logit, our diffusion model learns to predict high-dimensional noise, which is inherently more difficult to overfit. Second, diffusion models excel at modeling multimodal distributions and thus can outperform GAIL's discriminator when expert demonstrations exhibit significant variability. We will revise the paper to include these intuitions. > More recent LfO baselines should be compared and discussed [1, 2]. We thank the reviewer for these references. We will revise the paper to discuss these works. As requested by the reviewer, we additionally included the results of two recent methods: Decoupled Policy Optimization (DePO) [2] and Optimal Transport (OT) [3] provided by Reviewer P9FE, which is considered more general than ADS [4] provided by Reviewer Dab5. We report the results in Figure R.1 in the PDF file attached to the rebuttal summary. The results show that our proposed method outperforms both DePO and OT. DePO decouples the policy into a high-level state planner and an inverse dynamics model (IDM), utilizing an embedded decoupled policy gradient and generative adversarial training. This decoupled design makes the planner and IDM transferable across different domains. However, DePO still suffers from the same challenges as GAIL, such as overfitting and the inability to model multimodality. Moreover, it also requires training with an IDM, making it more difficult to tune. OT uses proxy rewards derived from Sinkhorn distances rather than directly obtaining rewards from the raw logits of the discriminator, which enhances stability compared to AIL methods. However, OT computes distances at the trajectory level rather than the transition level, which requires monotonic trajectories. Consequently, OT performs well in environments like Walker and AdroitDoor, where trajectory variety is limited. 
However, it struggles in tasks with diverse trajectories, such as navigation, where the initial and goal states vary significantly. In contrast, our method generates rewards at the transition level, allowing us to identify transition similarities even when facing substantial trajectory variability. This flexibility enables our method to succeed in more complex environments with diverse trajectories. We will include the results and discussion of these two methods in the revised paper. > The diffusion discriminator is different from prior methods in that we can sample from the distribution $D(s, s')$. Can the resulting discriminator itself be used as a policy when combined with an inverse dynamics model $p(a|s, s')$? As suggested by the reviewer, we believe that combining an ideal inverse dynamics model (IDM) with our diffusion discriminator could indeed result in an effective policy. This potential is illustrated by our successful trajectory generation in PointMaze experiments (Section 5.5 and Figure 5), where our diffusion discriminator shows the capability to accurately predict the next state based on the current state. However, obtaining a high-quality IDM is challenging, as discussed in lines 27-29. Training IDMs requires data that is well aligned with the expert's data distribution, while collecting such data relies on having an effective policy, creating a deadlock. Even if this problem is solved, the planner may also produce invalid transitions for the IDM, posing challenges for both planners and IDMs. Given these difficulties, our method directly learns a policy with the rewards produced by the diffusion discriminator instead of addressing the difficulties of obtaining an ideal IDM. We will revise the paper to make this clear. **References** [1] Li et al. "Your diffusion model is secretly a zero-shot classifier." In ICCV, 2023. [2] Liu et al. "Plan your target and learn your skills: Transferable state-only imitation learning via decoupled policy optimization." 
In ICML, 2022. [3] Papagiannis et al. "Imitation learning with Sinkhorn Distances." In ECML PKDD, 2022. [4] Liu et al. “Imitation Learning from Observation with Automatic Discount Scheduling.” In ICLR, 2024. --- Rebuttal Comment 1.1: Title: Reminder: The reviewer-author discussion period ends in three days Comment: We would like to express our sincere gratitude to the reviewer for the thorough and constructive feedback. We are confident that our responses adequately address the concerns raised by the reviewer, including the following points. - A clarification of the importance of the BCE and denoising (MSE) losses - Additional results of employing AIL regularization techniques - An intuitive explanation of why diffusion models are suitable for AIL - Additional results of more recent LfO baselines: Decoupled Policy Optimization and Optimal Transport - A discussion of combining our diffusion model and an inverse dynamics model Please kindly let us know if the reviewer has any additional concerns or if further experimental results are required. We are fully committed to resolving any potential issues, should time permit. Again, we thank the reviewer for all the detailed review and the time the reviewer put into helping us to improve our submission.
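For readers following the diffusion-classifier discussion above, here is a minimal sketch of how a conditional diffusion model's denoising error can be turned into a "realness" score for a transition. All names, the sigmoid mapping, and the noise-schedule handling are our assumptions for illustration; the paper's actual formulation is its Eq. 4.

```python
import numpy as np

def diffusion_realness_reward(eps_model, s, s_next, alphas_cumprod,
                              n_samples=1, rng=None):
    # Hedged sketch of a diffusion-classifier-style reward (names assumed).
    # eps_model(s, x_t, t) predicts the noise added to s' at diffusion step t;
    # a low denoising error suggests the transition (s, s') looks expert-like.
    rng = np.random.default_rng(rng)
    T = len(alphas_cumprod)
    scores = np.zeros(len(s_next))
    for _ in range(n_samples):
        t = rng.integers(0, T, size=len(s_next))
        eps = rng.standard_normal(s_next.shape)
        a_bar = alphas_cumprod[t][:, None]
        # Forward noising of the next state, conditioned on the current state.
        x_t = np.sqrt(a_bar) * s_next + np.sqrt(1.0 - a_bar) * eps
        mse = ((eps_model(s, x_t, t) - eps) ** 2).mean(axis=1)
        scores += 1.0 / (1.0 + np.exp(mse))  # sigmoid(-mse): low error -> higher score
    return scores / n_samples
```

The monotone map from negative denoising error to a bounded score is one plausible choice; the rebuttal's point is only that this error, not a one-dimensional FFN logit, carries the discriminative signal.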
Summary: This paper introduces Diffusion Imitation from Observation (DIFO), a novel approach to Imitation Learning from Observation (ILfO). DIFO innovates by employing a diffusion model as the discriminator, departing from the conventional feed-forward neural network approach. The authors leverage the connection between the diffusion model's training loss and the Evidence Lower Bound (ELBO) to construct a discriminator based on the relative likelihood of state-pairs originating from the expert rather than the agent. Experimental results across six environments demonstrate DIFO outperforming baseline methods. Strengths: The utilization of diffusion models' loss as ELBO to build a discriminator presents a novel and potentially fruitful direction in the field of ILfO. Weaknesses: - While DIFO demonstrates sample efficiency in terms of expert demonstrations, it lacks efficiency in environment interactions. - The experimental scope is limited, with tests conducted on only six environments and comparisons made against relatively older baselines. Technical Quality: 3 Clarity: 3 Questions for Authors: - There are other methods [1, 2, 3] that use metrics, i.e. likelihood, ELBO and entropy, from generative models for ILfO. I suggest discussing them in the related works. - How many samples of $t$ and $\epsilon$ are needed to compute the discriminator score in eq. 4? Is the value stable across different samples? - Although you are motivated to improve the robustness of AILfO, I don't see a direct connection that using a diffusion model instead of an FFN should make things more stable, since in the end you are still running adversarial learning. Could you please elaborate on this? - In Figure 7, the lines of $(\lambda_\text{MSE}, \lambda_\text{BCE}) = (1, 10^{-2})$ and $(\lambda_\text{MSE}, \lambda_\text{BCE}) = (1, 10^{-1})$ are too similar to each other, both in terms of colours and symbols, and are really hard to distinguish. 
Could you at least change one of them to different colour or symbol? Reference: [1] Escontrela *et al.*, Video Prediction Models as Rewards for Reinforcement Learning, NeurIPS 2023 [2] Zhang *et al.*, Action Inference by Maximising Evidence: Zero-Shot Imitation from Observation with World Models, NeurIPS 2023 [3] Huang *et al.,* Diffusion Reward: Learning Rewards via Conditional Video Diffusion Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the author has discussed the limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. > There are other methods [1, 2, 3] that use metrics, i.e. likelihood, ELBO and entropy, from generative model for ILfO. I suggest discussing them in the related works. We thank the reviewer for providing the references. We will revise the paper to discuss these methods using metrics from generative models for ILfO. > How many samples of $t$ and $\epsilon$ are needed to compute the discriminator score in eq. 4? Is the value stable across different samples? We sample a single denoising timestep (n=1) to compute the reward. As suggested by the reviewer, to investigate the robustness of our rewards with a varying number of timestep samples, we conducted experiments in PointMaze by averaging different numbers of timestep samples (n=1, 2, 5, 10) as rewards. The result presented in Figure R.3 in the PDF file attached to the rebuttal summary shows that our method can learn smoothly with a single sample. To verify the stability of the rewards, we present the standard-deviation-to-mean ratio of the rewards from 500 timesteps across different numbers of timestep samples in the table below. The values are averaged from a batch of 4096 transitions. The result below suggests that the reward is stable. |Learning Progress|n=1|n=2|n=5|n=10| |-|-|-|-|-| |20%|0.323|0.237|0.294|0.246| |40%|0.234|0.199|0.206|0.230| |60%|0.201|0.175|0.157|0.206| |80%|0.157|0.152|0.150|0.190| |100%|0.142|0.133|0.145|0.161| > Although you are motivated to improve the robustness of AILfO, I don't see a direct connection that using a diffusion model instead of an FFN should make things more stable We hypothesize that the instability of most existing AIL frameworks arises from the tendency of FFN discriminators to overfit, resulting in the policy's inability to learn from discriminator rewards. 
Unlike FFN binary classifiers, which map a high-dimensional input to a one-dimensional logit, our diffusion model learns to predict high-dimensional noise, which is inherently more difficult to overfit. Furthermore, expert demonstrations typically exhibit significant variability, which could be difficult for FFNs to model effectively [1]. Diffusion models, however, are adept at handling such multimodality, providing a more robust approach for capturing the diverse patterns present in expert behavior. We will revise the paper to include these insights. > While DIFO demonstrates sample efficiency in terms of expert demonstrations, it lacks efficiency in environment interactions. We respectfully disagree with the reviewer. Figure 3 in the main paper shows that our method, DIFO, is very efficient in environment interactions, i.e., sample efficiency. In PointMaze, DIFO only requires about 175k environment steps to achieve a 60% success rate, while the best-performing baseline AIRLfO needs over 400k steps. Similarly, in AdroitDoor, DIFO achieves a 75% success rate with only 4M steps, yet the best-performing baseline WAILfO needs around 10M steps. The table below presents the number of environment steps required for each method to achieve 50% expert performance. The result shows that our method, DIFO, can achieve the same performance with significantly fewer environment steps (environment interactions). That said, DIFO is very sample-efficient. 
|Environment|Goal|DIFO (Ours)|BCO|GAILfO|WAILfO|AIRLfO|IQ-Learn| |-|-|-|-|-|-|-|-| |PointMaze|Success rate=50%|120k|100k|850k|x|310k|160k| |AntMaze|Success rate=50%|1.1M|x|x|x|x|x| |FetchPush|Success rate=50%|630k|x|510k|670k|x|x| |AdroitDoor|Success rate=50%|3.1M|x|x|4.9M|x|x| |Walker|Return=3000|1.0M|x|2.0M|1.7M|x|x| |CarRacing|Return=400|400k|x|1.1M|x|x|x| > The experimental scope is limited, with tests conducted on only six environments and comparisons made against relatively older baselines. We conducted extensive experiments to evaluate our method across various aspects. - The six test environments we selected cover a wide range of domains, including navigation (PointMaze and AntMaze), manipulation (FetchPush and AdroitDoor), locomotion (Walker), and games (CarRacing). In contrast, most existing works [2, 3, 4, 5] focus only on locomotion tasks. - The tasks included in our work cover both vectorized-state-based environments (PointMaze, AntMaze, FetchPush, AdroitDoor, and Walker) and image-based environments (CarRacing and DrawerClose), while most existing works [2, 3, 4, 5, 6, 7, 8, 9] only consider vectorized-state-based environments. - Our work analyzes data efficiency by varying the amount of available expert demonstrations and evaluating the learning performance of all methods in Section 5.4. - We designed a toy environment to visualize the reward functions learned by our proposed method and GAIfO. > More baselines Please see the response to Reviewer Dab5. **References** [1] Li et al. "Infogail: Interpretable imitation learning from visual demonstrations." In NIPS, 2017. [2] Liu et al. "Plan your target and learn your skills: Transferable state-only imitation learning via decoupled policy optimization." ICML, 2022. [3] Papagiannis et al. "Imitation learning with sinkhorn distances." In ECML PKDD, 2022. [4] Ni et al. "f-irl: Inverse reinforcement learning via state marginal matching." In CoRL, 2021. [5] Liu et al. 
“CEIL: Generalized Contextual Imitation Learning.” In NeurIPS, 2023. [6] Garg et al. "Iq-learn: Inverse soft-q learning for imitation." In NeurIPS, 2021. [7] Fu et al. "Learning robust rewards with adversarial inverse reinforcement learning." In ICLR, 2018. [8] Ho et al. "Generative adversarial imitation learning." In NIPS, 2016. [9] Xiao et al. "Wasserstein adversarial imitation learning." arXiv, 2019. --- Rebuttal Comment 1.1: Title: Looks good, I will increase my score Comment: I would like to thank the authors for their rebuttal. The additional experiments and clarifications make the results more convincing. In light of this, I will increase my score from 5 to 6. --- Reply to Comment 1.1.1: Title: Re: Looks good, I will increase my score Comment: We sincerely thank the reviewer for acknowledging our rebuttal and helping us improve our submission.
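The stability table earlier in this thread reports a standard-deviation-to-mean ratio of the rewards for different numbers of timestep samples n. Under our assumptions about the protocol (per-transition rewards averaged over n stochastic samples, then std/mean taken over the batch), it could be estimated like this; the function name and averaging order are ours, not the authors':

```python
import numpy as np

def reward_cv(reward_fn, transitions, n_samples, seed=None):
    # Hedged sketch: average n stochastic reward samples per transition,
    # then report the std-to-mean ratio (coefficient of variation) over
    # the batch, as in the stability table above.
    rng = np.random.default_rng(seed)
    per_transition = np.array(
        [np.mean([reward_fn(tr, rng) for _ in range(n_samples)])
         for tr in transitions]
    )
    return per_transition.std() / per_transition.mean()
```

As expected for averaged noisy estimates, the ratio shrinks roughly like 1/sqrt(n), which is consistent with the mild improvement from n=1 to n=10 in the table.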
Summary: This paper leverages a diffusion model to learn an expert state-transition model and, additionally, a discriminator that can differentiate expert and agent states. The paper conducts experiments on standard RL environments and demonstrates better performance and data efficiency. Strengths: - The idea is simple, and its experimental results demonstrate strong performance and data efficiency - The paper is well written in general Weaknesses: - Under section 4.3, why do we only learn the MSE on the expert data but not also the agent data? My understanding is that both data will impact the discriminator output based on section 4.2. - Diffusion models are usually used for modelling multimodal distributions. I have two questions: - Is the expert stochastic? Does the expert policy exhibit diverse behaviours? - Is it more convincing to have stochastic environments rather than purely deterministic environments? - The experimentation seems very limited for the amount of theory it provides---in particular there is no theory regarding whether the proposed objective is truly sound. - Tuning the $\lambda$'s seems difficult as they can vary by ~20% success rate according to figure 7. How should one address this? Technical Quality: 2 Clarity: 3 Questions for Authors: **Questions** - On page 8, line 298, the paper indicates that a "smoother contour" allows for bringing a learning agent closer to the expert. I was wondering if this property is only useful for specific environments but not in general. Dense reward may be helpful but often can misguide policy learning. **Possible typos** - Page 3, line 90: "can't" should be "cannot" Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The approach leverages only expert data; it remains to extend this to settings with suboptimal demonstrations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. > the paper indicates that a "smoother contour" allows for bringing a learning agent closer to the expert. I was wondering if this property is only useful for specific environments but not in general. Dense reward may be helpful but often can misguide policy learning. We thank the reviewer for the question. The objective of RL is to maximize the overall return. The RL algorithm should converge to the optimal policy as long as the reward function produces the highest reward at the optimal transition. A smoother contour helps guide the RL algorithm without misguiding the final policy. Our method's reward peaks align with the expert distribution (see Figure 7(a) in the main paper), ensuring accurate guidance. Moreover, under the AIL framework, the reward function evolves alongside the policy. The reward function becomes sparser as the policy improves and eventually converges to a relatively sparse condition. Detailed theoretical guarantees can be found in the GAIL paper [1]. In addition, many previous imitation learning and inverse RL papers visualize the reward function with plots similar to ours [2, 3, 4, 5]. Hence, we believe that this approach is widely accepted for evaluating reward functions. Lastly, we would like to highlight that we evaluate DIFO in diverse domains, including navigation, manipulation, locomotion, and control, in both state- and image-based environments. Our evaluation results demonstrate that DIFO is broadly applicable and not restricted to specific environments. > Why do we only learn the MSE on the expert data but not also the agent data? Please see the overall response. > Is the expert stochastic? Does the expert policy exhibit diverse behaviors? All of our expert data incorporates stochasticity to enhance diversity in trajectories. 
In the PointMaze environment, our expert trajectory data is generated using a breadth-first search planner with added stochasticity to create different path choices, resulting in various trajectories for the same start and end positions. In other environments, we add a small amount of noise to the experts' actions. These methods provide stochasticity and multimodality in expert behaviors. Our model effectively learns from such expert demonstrations, demonstrating its capability to handle multimodal distributions. > Diffusion models are usually used for modeling multimodal distribution. As discussed in Section 5.5 and Section D of the main paper, we use a trained diffusion model to generate maze trajectories. As previously mentioned, our expert data exhibits a certain degree of multimodality, meaning that there are different paths in the dataset for the same start and goal locations. The generated results support our observations, as our model effectively produces multimodal possible paths. This demonstrates the benefit of modeling multimodal distributions using diffusion models. > Is it more convincing to have stochastic environments rather than purely deterministic environments? Please see the overall response. > The experimentation seems very limited for the amount of theory it provides---in particular there is no theory regarding whether the proposed objective is truly sound. Our method builds upon the established theories of Diffusion Classifier [6] and GAIL [1], as described in Section 4. The Diffusion Classifier paper shows that the diffusion loss (Equations 2 and 3) approximates ELBO, effectively transforming a diffusion model into a classifier. GAIL proves that IRL is a dual problem of an occupancy measure matching problem. Consequently, running RL with the discriminator IRL reward, which recovers the primal optimum from the dual optimum, ensures that the resulting policy is the primal optimum. 
As a result, by adversarially training the discriminator and the policy, we can match the distribution of the agent's policy to that of the expert. By incorporating the diffusion model as the discriminator into the AIL framework, our method leverages the strengths of the diffusion model while maintaining optimality guarantees. We will revise the paper to include these details. We believe our experiments are comprehensive. Please see the response to Reviewer AzJp. > Tuning the lambda seem difficult as they can vary by ~20% success rate according to figure 7. Figure 7 aims to investigate the effect of $\lambda$, and therefore, we experimented with a very wide range of values. The performance only drops by 20% when the lambda is increased or decreased by two orders of magnitude from the optimal value we identify. In Figure 7(a), the performance of $\lambda=0.1$ and $0.01$ is comparable, with overlapping standard deviations, indicating no statistically significant difference between them. Similarly, in Figure 7(b), although the performance appears to vary slightly, the standard deviations overlap. Therefore, we believe DIFO is not sensitive to $\lambda$. Any $\lambda$s in a reasonable range [0.1~0.001] can yield good performance. **References** [1] Ho et al. "Generative adversarial imitation learning." In NIPS, 2016. [2] Garg et al. "Iq-learn: Inverse soft-q learning for imitation." In NeurIPS, 2021. [3] Fu et al. "Learning robust rewards with adversarial inverse reinforcement learning." In ICLR, 2018. [4] Ni et al. "f-irl: Inverse reinforcement learning via state marginal matching." In CoRL, 2021. [5] Liu et al. "Energy-based imitation learning." In AAMAS, 2021. [6] Li et al. "Your diffusion model is secretly a zero-shot classifier." In ICCV, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and the additional experiments. In short, I will increase my score. 
Regarding the theory: Please include the response as a high-level intuition to explain why this approach is reasonable. Although I was expecting something like convergence bounds, sample-complexity bounds, etc. Regarding $\lambda$: Please also include a short sentence on this, I believe this can help practitioners when they aim to reproduce this work. A further question about computational cost: how much time should one expect the proposed algorithm to take compared to existing ones? --- Reply to Comment 1.1.1: Title: Re: Official Comment by Reviewer 7W6h Comment: We sincerely thank the reviewer for acknowledging our rebuttal and helping us improve our submission. As suggested by the reviewer, we will revise the paper to include high-level intuitions explaining why this approach is reasonable and a detailed description of $\lambda$ as provided in the rebuttal. Moreover, we will make the code, scripts, datasets, and model weights publicly available to ensure reproducibility. **Computational cost**: Below, we provide the approximate time it took for our method to learn each task. Note that the time provided below was estimated when multiple jobs were running on the same workstation, and, therefore, the running time could be overestimated. The workstation is equipped with an Intel Xeon W7-2475X CPU, two RTX 4090 GPUs, and two 64GB DDR5-4800 RDIMM RAMs. |Task|PointMaze|AntMaze|FetchPush|AdroitDoor|Walker|CarRacing| |-|-|-|-|-|-|-| |**Time**|40 minutes|3 hours|2.5 hours|5 hours|7 hours|8 hours| We will include this information in the revised paper.
Rebuttal 1: Rebuttal: The attached PDF file contains the following content: - **Two additional baselines (DePO and OT) and the results [Reviewer AzJp, Reviewer Dab5, Reviewer P9FE]**: We additionally include two recent and relevant baselines: Decoupled Policy Optimization (DePO, 2022) [1] and Optimal Transport (OT, 2022) [2] suggested by reviewer Dab5 and P9FE. These baselines represent two other major families of LfO approaches: inverse dynamics models (IDM) and sequence matching. The experiment results presented in Figure R.1 show that our method outperforms both DePO and OT across all tasks. - **Optimizing the MSE loss with agent data [Reviewer 7W6h]**: We evaluate optimizing the diffusion model denoising loss $\mathcal{L}\_{\text{MSE}}$ with and without agent data in all tasks. The results presented in Figure R.2 show that optimizing $\mathcal{L}\_{\text{MSE}}$ with agent data leads to slower convergence and less stable performance, which justifies our choice of only optimizing $\mathcal{L}\_{\text{MSE}}$ with expert data. - **Sampling different numbers of denoising timesteps for computing rewards [Reviewer AzJp]**: To examine how varying the number of denoising timestep sampled for computing rewards impacts policy learning, we experiment with different sample sizes (n=1, 2, 5, 10) in PointMaze. The results are shown in Figure R.3, indicating the effect of various numbers of samples is statistically insignificant, which justifies our choice of using one timestep sample for computing rewards. - **Stochastic environments [Reviewer 7W6h]**: To investigate our diffusion model's ability to handle stochastic data, we created a stochastic AntMaze environment by adding strong Gaussian noise with a standard deviation of 0.5 to the actions. The results are shown in Figure R.4, demonstrating our method maintains robust performance under stochasticity. 
- **Additional image-based Meta-World drawer close task [Reviewer P9FE]**: In response to the suggestion of adding more image-based tasks, we introduced the drawer close task from Meta-World [3]. This table-top manipulation task requires the agent to control a Sawyer robotic arm to close a drawer. Figure R.5(a) provides a screenshot of the task, and the learning efficiency is depicted in Figure R.5(b). Our method, DIFO, demonstrates superior performance compared to the baselines, including BC, BCO, GAIfO, WAILfO, AIRLfO, IQ-Learn, and DePO. > [Reviewer 7W6h] Why do we only learn the MSE on the expert data but not also the agent data? As pointed out by the reviewer, our method optimizes $\mathcal{L}\_{\text{MSE}}$ (Eq. 6) to approximate the ELBO using only expert demonstrations. We hypothesize that optimizing $\mathcal{L}\_{\text{MSE}}$ with agent data leads to unstable training because, during the early stage of training, the agent policy changes frequently and generates diverse transitions. To investigate the effect of optimizing this MSE loss using agent data, we experiment with optimizing $\mathcal{L}\_{\text{MSE}}$ with and without agent data on all tasks. The results are reported in Figure R.2 in the PDF file attached to the rebuttal summary. We found that optimizing $\mathcal{L}\_{\text{MSE}}$ with agent data can lead to slower and unstable convergence, especially in tasks with larger state and action spaces, e.g., AdroitDoor, where it leads to a 0% success rate. We hypothesize that the rapidly changing agent distribution hinders the learning of the diffusion model when $\mathcal{L}\_{\text{MSE}}$ is optimized with agent data. As a result, the overall performance can be less stable. Hence, we design our method to optimize $\mathcal{L}\_{\text{MSE}}$ using only expert data. We thank the reviewer for the question. We will revise the paper to include this experiment and the discussion. 
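The design choice discussed above (denoising MSE on expert data only, binary cross-entropy on both expert and agent data) amounts to a combined discriminator objective along these lines. This is a loose sketch with our own names and weighting; it follows the paper's $\mathcal{L}\_{\text{MSE}}$/$\mathcal{L}\_{\text{BCE}}$ split only at a high level:

```python
import numpy as np

def discriminator_loss(mse_expert, d_expert, d_agent,
                       lam_mse=1.0, lam_bce=0.1):
    # Hedged sketch of the combined objective: the denoising (MSE) term
    # uses expert transitions only, while the BCE term uses discriminator
    # scores on both expert (label 1) and agent (label 0) transitions.
    eps = 1e-8
    bce = -(np.log(d_expert + eps).mean()
            + np.log(1.0 - d_agent + eps).mean())
    return lam_mse * mse_expert.mean() + lam_bce * bce
```

A discriminator that scores expert transitions high and agent transitions low lowers the BCE term, while the MSE term keeps the diffusion model anchored to the (stationary) expert distribution rather than the rapidly changing agent one.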
> [Reviewer 7W6h] Is it more convincing to have stochastic environments rather than purely deterministic environments? We thank the reviewer for the suggestion. We additionally created a new stochastic AntMaze environment where Gaussian noise is added to the agent's actions before they are applied in the environment. The noise has magnitude 0.5, so that the action actually applied in the environment is $action + 0.5 \cdot \mathcal{N}(0,1)$. Given that the action space of this environment is $[-1,1]$, this represents a high level of stochasticity. We report the result in Figure R.4 in the PDF file attached to the rebuttal summary. The result shows that the performance of our method remains robust even under such high stochasticity, indicating our model's ability to adapt to stochastic environments effectively. We appreciate the reviewer's insightful suggestion, which led to this experiment that strengthens our paper. We will include this result in the revised paper. **References** [1] Liu et al. "Plan your target and learn your skills: Transferable state-only imitation learning via decoupled policy optimization." In ICML, 2022. [2] Papagiannis et al. "Imitation learning with Sinkhorn Distances." In ECML PKDD, 2022. [3] Yu et al. "Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning." In CoRL, 2019. Pdf: /pdf/396edd2f54d6e39d2369e5613223576e51dab933.pdf
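The action perturbation used for the stochastic AntMaze variant above can be sketched as follows. This is our own illustration; the rebuttal only specifies the additive $0.5 \cdot \mathcal{N}(0,1)$ term, so the clipping to the $[-1,1]$ action space is an assumption:

```python
import numpy as np

def perturb_action(action, noise_std=0.5, low=-1.0, high=1.0, rng=None):
    """Add Gaussian noise to an action before it is applied in the environment.

    Clipping to the action space is our assumption; the rebuttal specifies
    only the additive 0.5 * N(0, 1) noise term.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = action + noise_std * rng.standard_normal(np.shape(action))
    return np.clip(noisy, low, high)
```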
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Graph neural networks and non-commuting operators
Accept (poster)
Summary: This paper considers the tasks of extending GNNs to multiple graphs with a shared node set (and the edges representing different types of relations). In order to do this in a principled manner, the authors consider the algebra of non-commutative polynomials and provide a detailed analysis of stability to perturbations. The authors also extend their setting to graphons and show several consistency results (where the practical implications are to transferability). Notably, the proofs are well written with careful attention to detail and do not suffer from many of the issues common to proofs in ML papers. (See e.g., Remark 16.) Overall, this paper provides a solid, easily understandable, theoretical framework for an important problem (learning on multigraphs). Acceptance is recommended. Strengths: Section 6, connecting the stability guarantees to the training procedure is interesting! Theory is novel, interesting and well explained. The paper is generally well written and well motivated. The introduction of WtNNs is interesting, well motivated, and provides a solid theoretical framework for transferability of NNs on multigraphs. Weaknesses: It would be helpful if the sections on non-commutative polynomials and operator filters discussed related work on algebraic signal processing and defining neural nets on arbitrary measure spaces, e.g., https://arxiv.org/pdf/2210.17425, https://arxiv.org/pdf/2108.09923, https://arxiv.org/pdf/2210.16272, https://arxiv.org/pdf/2009.01433, https://arxiv.org/abs/2208.08561 Corollaries 2 and 3 are nice, but should probably be combined into a single corollary. Additionally, it looks like equation 5 could be fit onto a single line (possibly with a small amount of negative hspace.) A more detailed explanation of Example 6 would be helpful. 
I had to read it several times in order to understand it. Having some experiments with real data would be helpful (although I do recognize that the contributions of this paper are primarily theoretical). A short conclusion would be nice in the camera-ready copy. Line 494: It is not clear where "block operator" norms were introduced. In Lemma 10 (and throughout) the authors should make it more clear that \| \cdot \| refers to the \ell^2 norm since there are many different spaces in this paper. Lemma 14 is out of place; it should be before the proof of Lemma 14 (which means it would then become Lemma 13). More details should be added to the proof of theorem 7. Releasing a colab notebook after the review process is okay, but an anonymous github would have been better to improve upon reproducibility. Minor: Line 91: "behavior than" should be "behavior as". Line 97: the phrase "loopless" could be confusing. Should make it more clear you mean no self loops. The term End(W) is used without definition. I know what this means, but it would be better to discuss in order to keep the paper self contained for an ML audience. Line 167: Cartesian should have a capital C. The authors should check throughout for other words derived from last names, e.g., Euclidean or Laplacian. Line 237: V^{(s)} should be V^{(n)}, right? Line 313: "Corollary" should be "corollary". Line 494: "Lemma" should be "lemma". Technical Quality: 3 Clarity: 4 Questions for Authors: Does any of the graphon theory extend to random graphs derived from graphons? Do you think that analogs of the transferability results for graphon operators can be produced for settings where the graph is derived by subsampling a manifold as in https://arxiv.org/pdf/2305.18467, https://arxiv.org/pdf/2210.00376, https://arxiv.org/pdf/2212.12606, https://arxiv.org/pdf/2307.04056 How does theorem 1 compare with theorem 2 of this paper https://arxiv.org/pdf/2108.09923 Is it possible to define things in any sort of Fourier domain? 
Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough and positive review. See below our comments and answers to your questions. >Discuss related work ... We thank the reviewer for pointing this out. We have added all the above references as relevant related work in the Introduction. The new paragraph is in a comment for completeness. >Corollaries 2 and 3 are nice, but should probably be combined into a single corollary. We believe keeping two corollaries is useful since the first only applies to single blocks and could potentially be used as a step for understanding stability on a different architecture, whereas the second one is useful to speak about a large class of examples. >A more detailed explanation of Example 6 would be helpful. I had to read it several times in order to understand We have rewritten Example 6 aiming to improve its clarity. We paste it in a comment for completeness. >Having some experiments with real data would be helpful (although I do recognize that the contributions of this paper are primarily theoretical). We agree with the reviewer's comment and have added the results of our initial experiments on Recommendation Systems on real-world data in a new final Section entitled “Experiments with real-world data” (see global rebuttal for details). We believe this is a very interesting direction which deserves further exploration and we intend to make it the focus of future work. >A short conclusion would be nice in the camera-ready copy. We have added it. Please see Comments for details. >Line 494: It is not clear where "block operator" norms were introduced We have added a reference to the precise Section. >In Lemma 10 (and throughout) the authors should make it more clear that | | refers to the \ell^2 norm since there are many different spaces in this paper. 
We have rewritten the first paragraph of the Section to clarify the meaning of block norms and the fact that on individual components we are referring to the $L^2$ norm defined by the measure $\mu_V$. >Lemma 14 is out of place, it should be before the proof of Lemma 14 (which means it would then become Lemma 13) We agree and thank the reviewer for pointing this out! We have placed it properly. >More details should be added to the proof of theorem 7 We rewrote the proof to make it clearer. The new version is in a comment for completeness. >Releasing a colab notebook after the review process is okay, but an anonymous github would have been better to improve upon reproducibility We have added two github repositories containing the scripts used for the article: https://anonymous.4open.science/r/GtNN_weighted_circulant_graphs-27EB https://anonymous.4open.science/r/operatorNetworks-508F >Does any of the graphon theory extend to random graphs derived from graphons? Please see rebuttal of point 2 of reviewer (eJ8g) for a general perspective. > Do you think that analogs of the transferability results for graphon operators can be produced for settings where the graph is derived by subsampling a manifold as [references]? We believe the answer is yes. Our Theorems imply that whenever one is given a family of converging operators in the rather weak (and thus widely applicable) operator norm, transferability results hold. In particular it should be possible to use the spectral theory of convergence of graph Laplacians on manifolds (as developed, for instance, in the interesting work of Calder and García Trillos [1]) to prove novel transferability results for operator networks operating in this setting. [1] Jeff Calder, Nicolás García Trillos, Improved spectral convergence rates for graph Laplacians on ε-graphs and k-NN graphs, Applied and Computational Harmonic Analysis, Volume 60, 2022. 
>How does theorem 1 compare with theorem 2 of this paper https://arxiv.org/pdf/2108.09923 We thank the reviewer for pointing out the above article [denoted PBR]. There are commonalities between the current submission and [PBR] so we have added it as important previous work. Both [PBR] and our article are interested in the stability of filters obtained from representations of operator algebras. The key difference between our stability results and those of [PBR] stems from the choice of norm. More specifically: Theorem 2 in [PBR] gives a local perturbation result in the Hilbert-Schmidt norm and contains an unspecified error term. By contrast, in our Theorem 1 we give up on the Hilbert-Schmidt norm and instead obtain an upper bound in the operator norm without any unspecified error term. Our result is easily computable, because it depends explicitly on the coefficients of noncommutative polynomials. Our Theorem 1 and [PBR, Theorem 2] are thus complementary inequalities, neither objectively stronger than the other, and both could be applied in different circumstances. For the kinds of applications we discuss (transferability), the operator norm is a clearly advantageous choice, giving novel Universal transferability results. The fact that our stability bounds are easily computable in terms of polynomial coefficients plays a key role in the novel "stable" training algorithms we propose. >Is it possible to define things in any sort of Fourier domain? We believe the answer is no in general. The Fourier signal processing point of view is basically about diagonalizing the shift operator; however, from basic matrix theory we know that a family of operators is simultaneously diagonalizable if and only if it is commutative and each element of the family is diagonalizable, so commutativity seems essential. An interesting possible extension is to operator tuples which are representations of reductive noncommutative groups. 
However, even for that case the matrices would become only simultaneously block diagonal, which would be rather far from the original Fourier transform (although there is a noncommutative Fourier transform that could provide a weak substitute in that case). Nothing like this is available for general noncommuting operators, which are the main focus. --- Rebuttal 2: Comment: **New paragraph referring to related work in the intro** The goal of this paper is to extend the mathematical theory of GNNs to account for multimodal graph settings. The most closely related existing work is the algebraic neural network theory of Parada-Mayorga, Butler and Ribeiro [PMR1][PMR2][PMR3], who pioneer the use of algebras of non-commuting operators. The setting in this paper could be thought of as a special case of this theory. However, there is a crucial difference: whereas the main results in the articles above refer to the Hilbert-Schmidt norm, we introduce and analyze block-operator norms on non-commutative algebras acting on function spaces. This choice allows us to prove stronger stability and transferability bounds that, when restricted to classical GNNs, improve upon or complement the state-of-the-art theory. In particular, we complement work in \cite{ruiz2021graph} by delivering bounds that do not exhibit non-transferable energy, and we complement results in \cite{maskey2023transferability} by providing stability bounds that do not require convergence. Our bounds are furthermore easily computable in terms of the networks' parameters, improving on the results of~\cite{PMR2}, and in particular allow us to devise novel training algorithms with stability guarantees. **New writing of Example 6:** Fix $p\in (0,1)$ and let $W(x,y)=p$ for $x\neq y$ and zero otherwise. The graphon Erdos-Renyi graphs $G^{(n)}$ constructed from $W$ as in $(2)$ above are precisely the usual Erdos-Renyi graphs. 
Part (2) of the sampling Lemma guarantees that $T_{\hat{G}^{(n)}}$ converges to $T_W$ almost surely in the operator norm. By contrast we will show that the sequence **does not converge** to $T_W$ in the Hilbert-Schmidt norm by proving that $\|T_{\hat{G}^{(n)}}-T_W\|_{\rm HS}\geq\min(p,1-p)>0$ almost surely. To this end note that for every $n\in \mathbb{N}$ and every $(x,y)\in [0,1]^2$ with $x\neq y$ the difference $|W_{\hat{G}^{(n)}}(x,y)-W(x,y)|\geq \min(p,1-p)$ since $W_{\hat{G}^{(n)}}(x,y)$ is either $0$ or $1$. We conclude that $\|T_{\hat{G}^{(n)}}-T_W\|_{\rm HS} = \|W_{\hat{G}^{(n)}}-W\|_{L^2([0,1]^2)}\geq \min(p,1-p)>0$ for every $n$ and therefore the sequence fails to converge to zero almost surely. **Short Conclusion (to be added at the end of the article)** In this paper, we introduce graph-tuple networks, a way of extending GNNs to the multimodal graph setting through the use of tuples of non-commutative operators endowed with appropriate block-operator norms. We show that GtNNs have several desirable properties such as stability to perturbations and a Universal transferability property on convergent graph-tuples, where the transferability error goes to zero as the graph size goes to infinity. Our transferability theorem improves upon the current state of the art even for the GNN case. Furthermore, our error bounds are expressed in terms of computable quantities from the model. This motivates our novel algorithm to enforce stability during training. Experimental results show that our transferability error bounds are reasonably tight, and that our algorithm increases stability with respect to graph perturbation. In addition, they demonstrate that the transferability theorem holds even for sparse graph tuples. Finally, the experiments on the movie recommendation system suggest that allowing for architectures based on GtNNs is of potential advantage in real-world applications. 
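The norm contrast in Example 6 above can also be checked numerically. The sketch below is our own illustration, using our own finite discretization of the integral operators (kernel matrix divided by $n$): for a sampled Erdos-Renyi graph, the Hilbert-Schmidt distance to the constant graphon stays bounded away from zero while the operator-norm distance shrinks with $n$:

```python
import numpy as np

def er_norm_gap(n, p, seed=0):
    """Compare operator-norm and Hilbert-Schmidt distances between a sampled
    Erdos-Renyi graph and the constant graphon W = p.

    Our own discretization: the integral operator of a kernel K on [0,1]^2 is
    approximated by its n x n evaluation matrix divided by n.
    """
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random((n, n)) < p, 1).astype(float)
    A = upper + upper.T                     # ER adjacency, no self-loops
    W = np.full((n, n), p)
    np.fill_diagonal(W, 0.0)                # graphon is zero on the diagonal
    K = A - W                               # kernel of the difference operator
    op_norm = np.linalg.norm(K, 2) / n      # spectral norm of K / n
    hs_norm = np.sqrt(np.mean(K ** 2))      # L^2 norm of the kernel
    return op_norm, hs_norm
```

For $p = 1/2$ the Hilbert-Schmidt distance hovers near $\sqrt{p(1-p)} = 0.5$ for every $n$, while the operator distance decays roughly like $1/\sqrt{n}$, matching the claim that operator-norm convergence holds where Hilbert-Schmidt convergence fails.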
Title: Comments accompanying the rebuttal --- Rebuttal Comment 2.1: Title: Good Job Comment: I thank the authors for their thorough response.
Summary: This paper introduces a new type of neural network called graph-tuple neural networks (GtNN) that can handle multiple graphs with the same set of nodes and non-commuting graph operators. The authors develop a mathematical theory to show the stability and transferability of GtNNs and derive related bounds, proving a universal transferability theorem for these networks. Their results extend existing transferability theorems for traditional graph neural networks and include experiments on synthetic data to demonstrate their findings. Strengths: 1. The authors put forward a novel theoretical framework, GtNN, to model a general case when considering transferability and stability, and a Universal transferability Theorem to quantify it. 2. A generative method for the mentioned graph sequences is provided instead of merely assuming the existence of the sequences used in their theoretical framework. 3. The graphon-tuple neural networks (WtNNs) are defined to aid the theoretical derivation. Weaknesses: 1. No experiments on real data to prove the transferability and stability gain using their method compared to GNN. 2. The theoretical background knowledge is not adequately provided. 3. Lacking proper reasoning to connect the sections. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Can this method be viewed as a training strategy that improves every existing Graph Mining method? 2. Since WtNNs are “the natural limits of (GtNNs) as the number of vertices grows to infinity”, why did you discuss WtNNs right after the Perturbation inequalities instead of elaborating on GtNNs first? 3. What is the convergence order, i.e. O(N^k), of the quantity when N goes to infinity? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors did not adequately discuss the limitations of their work. They may consider discussing how to present their complicated theory in an understandable way to help this work gain influence. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments and questions. We address them below: >No experiments on real data to prove the transferability and stability gain using their method compared to GNN. We have included some initial results on using operator networks on real-world data in a new final Section of the article (see brief summary in the global rebuttal); we have also added some synthetic transferability experiments. The main contribution of the article is, however, theoretical, and we have added a proof of tightness of our inequalities in the new Remark 13 in Appendix C as objective proof of the applicability of our results (see rebuttal to Reviewer eJ8g). >The theoretical background knowledge is not adequately provided. We have made an effort to add some background knowledge by adding a reference to Hilbert spaces and norms [1] that are needed for a thorough understanding of the article. We have also verified that all novel terms are defined in a rigorous and logically consistent fashion. We kindly ask the reviewer to let us know what background needs further clarification so we can improve the manuscript. [1] Kostrikin, Alexei I.; Manin, Yuri I. Linear algebra and geometry. Translated from the second Russian (1986) edition by M. E. Alferieff. Revised reprint of the 1989 English edition. Algebra, Logic and Applications, 1997. >Lacking proper reasoning phases to connect each sections. We disagree with the reviewer. Each Section begins by clarifying how it connects with the rest of the paper. Due to space limitations it has not been possible to make this more extensive. >Can this method be viewed as a training strategy that improves every existing Graph Mining method? In order to apply this method as a training strategy one needs computable bounds which estimate the stability of the resulting network for each choice of the parameters. 
The main contribution of our work is the discovery of an easily computable form for such bounds: this is the key behind the training strategy. The same method could thus apply to any area where such bounds are available. >Since WtNNs are “the natural limits of (GtNNs) as the number of vertices grows to infinity”, why did you discuss WtNNs right after the Perturbation inequalities instead of elaborating on GtNNs first. We agree and thank the reviewer for pointing this out! We have moved the key example of GtNNs to the end of Section 2, which occurs before the definition of graphons, as it should to facilitate understanding. >What is the convergence order, i.e. O(N^k), of the quantity when N goes to infinity? The proof of Theorem 8 clarifies that the speed of convergence is determined by the speed at which the sequence of induced graphons converges to the limiting graphon, multiplied by a computable constant which depends on the coefficients of the noncommutative polynomials (as in Theorem 7). This quantity is not a fixed $N^k$; it could vary drastically from one graphon sequence to another. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for your rebuttal. After reading the rebuttal, I decide to maintain my score.
Summary: This paper considers the node-level learning task with several graphs sharing the same vertex set called graph-tuple neural networks (GtNNs). The stability and transferability of GtNNs are studied using properties of non-commuting non-expansive operators. The authors show that all GtNNs are transferable on convergent graph-tuple sequences using graphon-tuple neural network limits. They also illustrate the theoretical results with experiments on synthetic data. Strengths: 1. A theoretical framework based on operator networks for analyzing graph neural networks on multimodal data where several graphs share the same set of nodes is proposed. Using this framework, a perturbation inequality, which provides the perturbation stability of non-expansive operator-tuple networks, is proven. The proposed framework may be extended to accommodate other settings and can be used to study other tasks related to GNNs. 2. The authors also define the graphon-tuple neural networks (WtNNs) which are natural limits of GtNNs. Further, a Universal transferability Theorem for graphon-graph transference, which guarantees that the GtNN learned on a graph-tuple with sufficient many vertices transfers to other graph-tuples, is shown. This transferability result is original and addresses the multimodal learning aspect in GNNs. 3. This work contributes to the literature of analyzing GNNs through graph limits and the study of the multimodal setting through operator networks is original. The paper is clearly written. Weaknesses: 1. The tightness of the bounds is only demonstrated through numerical experiments rather than formal theoretical analysis. 2. The results are based on strong assumptions such as the existence of graphons induced by the graphs, the continuity of the graphons, the equispaceness of the vertex set, and the graph-tuple convergence in the operator norm. However, these assumptions may be standard in the literature. 3. The results lack support from real-world experiments. 
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the operator network framework be adapted to the case when the graphs have only partial overlap rather than the same set of vertices? 2. Is it possible to extend the current work to *sparse* graphs when there are no induced graphons? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed the limitations of the work and clearly stated the assumptions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and positive feedback. We address each of your comments and questions below: > The tightness of the bounds is only demonstrated through numerical experiments rather than formal theoretical analysis. Thank you for pointing this out. We have added a theoretical analysis showing that the bounds are tight for some simultaneously diagonal operator tuples, which we reproduce here using the notation of the paper: **Remark 13 Appendix C** We expect the bounds of the previous Theorem to be reasonably tight. To establish a precise result in this direction it suffices to prove that the bounds describe the true behavior in special cases. Consider the case $k=1$, $n=1$, assuming $T_V,T_W$ and $f\geq g$ are nonnegative scalars with $0\leq T_W\leq T_V\leq 1$ (a similar reasoning applies to the case of simultaneously diagonal nonexpansive operator tuples of any size). For a univariate polynomial $h(X)=\sum_{j=0}^d h_jX^j$ with nonnegative coefficients we have $|h(T_V)(f)-h(T_V)(g)| =\left(\sum_{j=0}^d h_j T_V^j \right)(f-g)\leq \sum_{j=0}^d |h_j|(f-g)$ with equality when $T_V=1$, and $|h(T_V)(f)-h(T_W)(f)| =\sum_{j=0}^d h_j\left(T_V^j-T_W^j\right)f = \sum_{j=0}^d h_j \left( j v_{(j)}^{j-1} (T_V-T_W) \right) f \leq C_1(h) |T_V-T_W| f$, where the second equality follows from the mean value theorem (for some $v_{(j)}$ in the interval $[T_W,T_V]$). This equality shows that $C_1(h)$ is the optimal constant bound since the ratio of the left-hand side by $T_V-T_W$ approaches $C_1(h)$ as $T_V$ and $T_W$ simultaneously approach one. > The results are based on strong assumptions such as the existence of graphons induced by the graphs, the continuity of the graphons, the equispaceness of the vertex set, and the graph-tuple convergence in the operator norm. However, these assumptions may be standard in the literature. Yes, they are. 
The only currently known approach for thinking about transferability is by showing it is a property automatically inherited along convergent graph sequences. Graphons are only one possibility. Crucially, our results show that the comparatively weaker operator norm convergence is sufficient for transferability and this should simplify proving many other transferability results (much of the existing literature focuses on the much stronger –and thus less applicable– Hilbert Schmidt norm convergence). Furthermore we point out that the perturbation inequalities (stability results) do not require any form of convergence, nor sampling from graphons (equispaced or otherwise). These assumptions are used only for the transferability results. The assumptions we make are not the only possibility, but some assumptions will be needed because in order to establish transferability we need to define what it means to grow the size of graphs consistently (which is equivalent to saying what it means for a sequence of growing size graphs to converge to a limiting distribution). This is needed to have a notion of ground truth to be able to say “if we train a GNN on graphs with $m$ vertices and evaluate it in graphs of $n$ vertices the error is bounded by epsilon” (such a statement relies on an implicit notion of graph convergence). Here we focus on graphons and the operator norm convergence (and we justify why this convergence is preferable over the Hilbert Schmidt). However, extending our results to other notions of convergence and limiting objects could be of interest. In particular the “Stretched Graphon Model” [1] replaces the notion of graphon $W : [0, 1]^2 → [0, 1]$ by stretching its domain from $[0, 1]$ to $\mathbb R_{+}$ while preserving the integrality condition, to generate sparse graphs. The equispaced sampling in $[0,1]$ is typically replaced by a Poisson sampling in $\mathbb R$. 
Similarly, the Graphex model introduced in [2] relates to an alternative notion of graph convergence known as sampling convergence [3]. >The results lack support from real-world experiments. We agree with the reviewer's comment and have added the results of our initial experiments on Recommendation Systems on real-world data in a new final Section, “Experiments with real-world data” (see global rebuttal for details). We believe this is a very interesting direction which deserves further exploration and we intend to make it the focus of future work. >Can the operator network framework be adapted to the case when the graphs have only partial overlap rather than the same set of vertices? If the union of all vertex sets is called $U$ and we have a shift operator $S$ on a subset $V$, then we can extend the matrix $S$ to a $|U|\times |U|$ matrix $\tilde{S}$ by making all the missing entries zero. In some applications with centered data (for instance in recommendation systems where the scores are centered to have mean zero) this extension by zero is harmless and leads to a satisfactory result. However, it should not be used blindly since it could introduce extra unwanted zeroes (blurring the difference between a zero rating and a missing rating). >Is it possible to extend the current work to sparse graphs when there are no induced graphons? The perturbation inequality would work over any convergent sequence in the operator norm. As we mention above, even if we don’t use graphons we need a notion of graph limits in order to establish transferability. There are some attempts in the literature to define limiting objects for certain families of sparse graphs [1,2,3] and our theory might immediately apply to those. It would be an interesting research direction to carry this idea forward. [1] C Borgs, JT Chayes, H Cohn, and N Holden. Sparse exchangeable graphs and their limits via graphon processes. JMLR, 2018. [2] V Veitch and DM Roy. 
The class of random graphs arising from exchangeable random measures, 2015. [3] C Borgs, JT Chayes, H Cohn and V Veitch. Sampling perspectives on sparse exchangeable graphs. Ann. Prob., 2019. --- Rebuttal 2: Title: Reply to rebuttal Comment: I thank the authors for the detailed responses and agreeing to address some of the criticism. I maintain my positive score of the paper.
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their thorough reading and constructive comments. We are encouraged that all three reviewers found the paper interesting. The paper’s main weakness, according to the reviews, was the lack of a more thorough empirical evaluation. To address this, we include two additional experiments, one of transferability on a synthetic dataset, and one on a real-world dataset of a movie recommendation system where information from two graphs is combined using the GtNN model. Figures 2 and 3, and references are included in the PDF. **Experiments on transferability for sparse graph tuples** We test the transferability behavior on the weighted circulant graph model from Appendix D. We are motivated by the practical setting where we aim to train a model on small graphs and evaluate it on larger graphs. We consider a piecewise constant graphon tuple $(W_1,W_2)$ induced from the $n=300$ circulant graph tuple, and similarly we generate piecewise constant functions by the interpolation operator $i_n$ for each data point. Next, we use this graphon and piecewise constant function as a generative model to generate deterministic weighted graphs $(G_1,G_2)$ of size $m \leq n$ as training graphs (normalized by $m$) and to generate training data by the sampling operator $p_m$. Since $||T_{W_j}-T_{\hat{G}_j}|| _{\rm op} \to 0$ as $m\to n$, according to Theorem 7 the transferability error goes to $0$ too. To demonstrate this, we train five different models with graph tuples of fixed size $m=100,150,200,250,300$ (respectively) and compare performance on the testing data with $n=300$. Figure 2 shows that the best testing MSE decreases as the training size $m$ approaches $n$ for the GtNN, which shows transferability holds for sparse graph tuples. For the stable GtNN, the general trend of the testing MSE curves also indicates transferability. 
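The generative step described above (obtaining a deterministic weighted graph of size $m$ from a graphon) can be sketched as follows. This is our own illustration: the midpoint equispaced grid and the zeroed diagonal are assumptions, and `W` stands for any graphon $W:[0,1]^2\to[0,1]$:

```python
import numpy as np

def sample_weighted_graph(W, m):
    """Evaluate a graphon W: [0,1]^2 -> [0,1] at m equispaced points to obtain
    a deterministic weighted graph, normalized by m as in the experiment above.

    The midpoint grid and the zero diagonal (no self-loops) are our assumptions
    about the sampling operator, not details given in the rebuttal.
    """
    x = (np.arange(m) + 0.5) / m
    G = W(x[:, None], x[None, :]) / m
    np.fill_diagonal(G, 0.0)
    return G
```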
In addition, the performance comparison between GtNN and stable GtNN for $m=100$ shows that our stability constraint improves the transferability by reducing the best testing MSE. However, this improvement only appears for the $m=100$ case. All the other cases have worse performance for the stable GtNN. We conjecture this is because the stability constraint makes the training process take longer to converge, and whenever it hits the constraint boundaries the MSE jumps, which also makes it harder to converge to a local minimum. It will be interesting to see if other learning algorithms or penalty functions for the stability constraints help improve the performance. **Experiments on real-world data from a movie recommendation system** Finally, we present results on the behavior of graph-tuple neural filters on real data as a tool for building a movie recommendation system. We use the publicly available MovieLens 100k database, a collection of ratings given by 1000 users to 1700 movies [1]. Our objective is to interpolate ratings among users: starting from the ratings given by a small set of users to a certain movie, we wish to predict the rating given to this movie by all the remaining users. Following [2], we center the data (by removing the mean rating of each user from all its ratings) and try to learn a \emph{deviation from the mean rating function}. More precisely, letting $U$ be the set of users, we wish to learn the map $\phi:\mathbb R[U]\rightarrow\mathbb R[U]$ which, given a partial deviation from the mean ratings function $f:U\rightarrow \{1,2,\dots,5\}$ (with missing data marked as zero), produces the full rating function $\hat{f}=\phi(f)$, where $f(u)$ contains the deviation from the mean ratings for user $u$. The classical collaborative filtering approach to this problem consists of computing the empirical correlation matrix $B$ among users via their rating vectors. 
A more recent approach [2] defines a shift operator $S$ on the set of users by sparsifying $B$. More precisely, we connect two users whenever their pairwise correlation is among the $k$ highest for both, and then approximate $\phi$ as a linear filter or, more generally, a GNN evaluated on $S$. Although typically superior to collaborative filtering, this approach has a fundamentally ambiguous step: how should the integer $k$ be selected? To the best of our knowledge, there is no principled answer to this question, so we propose considering several values simultaneously, defining a tuple of shift operators, and trying to learn $\phi$ via graph-tuple neural networks on $\mathbb R[U]$. More specifically, we compute two shift operators $T_1,T_2$ by connecting each user to the $10$ and $15$ most correlated other users respectively, and compare the performance of the GtNN on the tuple $(T_1,T_2)$ (2ONN) with the best between the GNNs on each of the individual operators $T_1$ and $T_2$ (GNN). To make the comparison fair, we select the degrees of the allowed polynomials so that all models have the same number of trainable parameters (seven). Figure 3 (left) shows that an appropriately regularized graph-tuple network significantly outperforms all other models at any point throughout the first $500$ iterations (the minimum occurs when the training error stops improving significantly). However, if the model is over-trained, as in the right plot of Figure 3, then it can suffer from a vanishing-gradient limitation which may lead to a trained model worse than the best one obtained from standard graph filters. This example suggests that graph-tuple neural networks are of potential relevance to applications. Pdf: /pdf/b5b1c0b315620e60893f4acd00456919e2abc5cc.pdf
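The mutual top-$k$ sparsification step described in the rebuttal above (keep an edge between two users only when each is among the other's $k$ most correlated users) is easy to prototype. The sketch below is a hypothetical implementation, not the authors' code; the function name and the use of a dense correlation matrix are our own assumptions:

```python
import numpy as np

def topk_shift_operator(B, k):
    """Sparsify a user-correlation matrix B into a shift operator:
    an edge (u, v) survives only if their correlation is among the k
    highest for *both* u and v (mutual top-k), as in the rebuttal."""
    n = B.shape[0]
    C = B.copy().astype(float)
    np.fill_diagonal(C, -np.inf)           # ignore self-correlation
    # indices of each user's k most-correlated other users
    topk = np.argsort(-C, axis=1)[:, :k]
    in_topk = np.zeros((n, n), dtype=bool)
    rows = np.repeat(np.arange(n), k)
    in_topk[rows, topk.ravel()] = True
    mutual = in_topk & in_topk.T           # keep only mutual edges
    return np.where(mutual, B, 0.0)
```

Running this twice with $k=10$ and $k=15$ would give the two shift operators $T_1, T_2$ fed to the graph-tuple network; since $B$ is symmetric and the mask is mutual, the resulting operator is symmetric with zero diagonal.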
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation
Accept (poster)
Summary: The paper proposes a framework to generate animated surfaces from input videos. It has a static step to generate a base triangle mesh, and textures represented by 3D Gaussians. A subsequent dynamic step changes the positions of the mesh vertices and the gaussians. The framework leverages Zero123 and SuGaR to generate the mesh and gaussians, integrating loss terms from those approaches in its training mechanism. The dynamic step uses geodesic distances to create a deformation graph and includes As-Rigid-As-Possible and Normal Consistency loss terms. Strengths: (1) This paper is very informative and well contextualized. I am a specialist in geometry but not in Diffusion models, but I was able to use the Introduction and Related Work sections to familiarize myself with the theme. That was very fun. There are lots of references to very recent papers, so good job on keeping track of the field! (2) The results are probably ready for current Computer Graphics pipelines, since it uses meshes and skinning. (3) I liked the mathematical modeling. Explaining everything in terms of the losses made a good paper structure. I have reservations about the specific mathematical notation (see Weaknesses), but reading was good overall. (4) I liked the proposed method. All loss terms make sense and are intuitive. Knowing that the SuGaR SDS would work well in a more complex pipeline was informative. (5) I liked the use of geodesic distance to compute the deformation graph. It is more robust than Euclidean distance indeed. Weaknesses: (1) I missed a video in the supplementary material. As a 4D approach, the paper would benefit greatly from showing one. It shows metrics about temporal coherence, but a video would avoid any possible doubt. (2) I'm missing references about 4D neural implicit surfaces in the 4D Representations paragraph in the Related Work section.
I believe a sentence including all of them would be sufficient. Novello, Tiago, et al. "Neural Implicit Surface Evolution." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. Guandao Yang, Serge Belongie, Bharath Hariharan, and Vladlen Koltun. Geometry processing with neural fields. Advances in Neural Information Processing Systems, 34, 2021. Mehta, Ishit, Manmohan Chandraker, and Ravi Ramamoorthi. "A level set theory for neural implicit evolution under explicit flows." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. (3) The mathematical notation is a little bit polluted, impacting reading in my opinion. Subscripts and superscripts are used too much. The text would be cleaner by using more symbol letters instead. A good rule of thumb is to avoid using subscripts and superscripts unless they represent indices. The same letters are also used for very different contexts. N is used for the number of vertices and for the control point set, for example. Even though different fonts are used in each context, the cognition when reading is to remember N and forget the font used. It would be cleaner to use different symbols. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) I would like to know more about the temporal coherence in the proposed approach, since a video is not available. Even though the paper shows metrics in that respect, I would also like to see qualitative results. (2) Even though the results are probably ready for a CG pipeline, I believe the training and inference must be computationally expensive. I would like to know the training and inference times, and the hardware used. (3) Does only changing positions, rotations, and scaling of the gaussians in the deformation step result in accurate view-dependent effects? In theory the spherical harmonics should also be recalculated. (4) Why 6 gaussians per triangle in the static stage? (5) Equation 10 should come with a small explanation to give intuition.
Mentioning that the logarithmic rotation is a mapping to the Lie algebra and that the exp is a conversion back to the rotation matrix form would at least give a direction for a reader who is not a specialist. (6) I believe the references in line 38 [29, 54, 17, 41, 9] are not associated directly with score distillation sampling. Is that a typo? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the paper has a specific section describing limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
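For intuition on the log/exp construction the reviewer mentions: blending rotations in the Lie algebra $\mathfrak{so}(3)$ and mapping back with the matrix exponential avoids the artifacts of averaging rotation matrices directly. A generic sketch of this idea (not the paper's exact Equation 10; all names are illustrative):

```python
import numpy as np
from scipy.linalg import expm, logm

def blend_rotations(rotations, weights):
    """Blend rotation matrices through the Lie algebra so(3):
    logm maps each rotation to a skew-symmetric matrix, the weighted
    average is taken there, and expm maps the result back to SO(3)."""
    L = sum(w * logm(R) for w, R in zip(weights, rotations))
    return expm(np.real(L))  # logm can leave a tiny imaginary residue

def rotz(theta):
    """Rotation about the z-axis by angle theta (helper for the demo)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
```

Blending `rotz(0.8)` with the identity at equal weights yields `rotz(0.4)`, i.e. the geodesic midpoint, which is why log/exp blending behaves well even under large rotations.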
Rebuttal 1: Rebuttal: The authors are grateful for the insightful feedback and support from the reviewer. Below we address the mentioned concerns separately. ### **Q1: Video qualitative results.** **A**: In the first paragraph of the appendix, we have provided the link to our anonymous project page, which contains the input video and our rendered video results as well as our Blender demo. Please check it. ### **Q4: Concerns about the computation cost of training and inference** **A**: For inference, which is the process of computing the deformed 3D object at a given timestamp, we report the inference time of different methods below: ||Consistent4D|DreamGaussian4D|4DGen|STAG4D|Ours| |:---:|:---:|:---:|:---:|:---:|:---:| |Inference Time|51ms|3ms|3ms|3ms|3ms(MLP)+11ms(Skinning)| Among these methods, Consistent4D utilizes an implicit representation and volume rendering, leading to many more query points for the deformation network and hence the longest inference time (51ms). Compared to the Gaussian-based methods DreamGaussian4D, 4DGen and STAG4D, our method has slightly slower inference (14ms). The extra computation overhead comes from the skinning calculation, which is implemented in pure PyTorch. This overhead can be eliminated through some engineering effort of implementing it as a CUDA extension, and we leave this for future work. As for the computation cost of training, we have reported the overall training time and GPU memory usage in our global rebuttal. Compared to other methods, our method is the most computationally efficient considering both time and memory consumption (0.8h and 8GB). ### **Q5: Concerns on updating spherical harmonics.** **A**: Following previous 3D generation works on 3D Gaussians, we use degree-0 spherical harmonics, which are actually equivalent to RGB color without view-dependent effects. Hence we do not compute the deformation of the spherical harmonics.
We will explore using higher-degree SH coefficients for view-dependent effects in the future. ### **Q6: Why 6 Gaussians per triangle in the static stage?** **A**: Balancing appearance representation capability against computational cost, we choose 6 Gaussians per triangle. More Gaussians would enable a better capability to represent the surface appearance, at the expense of more computation. As shown in Figure 2 and Table 3 of the supplemented PDF rebuttal, the experimental results demonstrate that 6 Gaussians already deliver satisfying performance considering the rendering quality and training time. It is worth mentioning that, even with only 1 Gaussian per face, our method still largely outperforms previous methods with the fastest training time. ### **Q2\&Q3\&Q7\&Q8: Concerns about writing.** **A**: We really appreciate your advice on the paper writing, and we will revise the manuscript according to the constructive suggestions. We will include the works on 4D neural implicit surfaces in the 4D representations paragraph in the Related Work section. All mathematical notations in the paper will be carefully simplified for easier reading. And we will add a proper explanation of Equation 10 as you suggested in Question (5). Finally, the references in line 38 are indeed a typo, and the words before these references should be "differentiable 3D representations" rather than "3D generation". --- Rebuttal Comment 1.1: Comment: Thank you for the responses. I missed the link for the result videos in the appendix, and I believe they are good enough. There are minor artifacts in the normals, but my concerns about temporal coherence are addressed. I think the paper is very well written and deserves publication, especially for the good contextualization, clean presentation of the theory, and the coherent mathematical treatment of the problem. It would be a very good reference for someone not so familiar with the field.
--- Rebuttal 2: Comment: The authors are grateful for your support! We will further polish our method in the final version.
Summary: DreamMesh4D is a novel framework for transforming static images into 4D dynamic meshes. It refines object geometry and texture using a Gaussian-mesh hybrid and a geodesic-based deformation graph. A new skinning algorithm combines the best of LBS and DQS for enhanced deformation. The method excels in creating high-quality 4D objects with improved rendering and consistency. Strengths: 1. The results show an improvement in both quantitative and qualitative metrics compared to previous methods. 2. The use of points to control motion is an interesting approach. 3. The writing is clear. Weaknesses: 1. The novelty of this work is somewhat limited in comparison to previous methods. The training strategy, the structure of the loss function, and the design of deformation are quite similar. The main difference lies in the representation, which is a predictable technical combination. 2. I find that the detail of the results is superior to that of previous methods. Therefore, in section 4.2, the author should elucidate which aspects of the pipeline design have contributed to the improved quality compared to the previous method's design. 3. The paper argues that previous works have significant time and memory complexity, but there is a lack of comparison regarding time and GPU usage costs. It would be beneficial to include such a comparison to demonstrate the efficiency of the proposed method. Technical Quality: 3 Clarity: 3 Questions for Authors: The current article lacks interesting insights. I suggest that the author should focus more on the advantages of this representation and explain why it is important for the 4D generation task. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 1 Limitations: Refer to the Weaknesses and Questions section Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors are grateful for the valuable and in-depth feedback of the reviewer. Below we address the mentioned concerns separately. ### **Q1: Limited novelty compared to previous methods.** **A**: Previous methods for monocular video-to-4D generation include Consistent4D, DreamGaussian4D, 4DGen and STAG4D. All of them directly employ existing dynamic neural radiance fields or 3D Gaussian Splatting as their base representations, and predict query points' deformation with an MLP network. In contrast, we are the first to exploit mesh as the underlying representation for the problem and demonstrate superior effectiveness compared to prior methods. Although it might be predictable, no prior work has really tried and demonstrated the possibility of the idea for this challenging task. As both reviewer 1 and reviewer 3 agree, the method to generate dynamic meshes is new and informative. The authors humbly think the proposed method would provide valuable insights for future works. ### **Q2: Which aspects of the pipeline have contributed to the quality improvement compared to previous methods?** **A**: The main aspect contributing to the quality improvement is the employment of the mesh representation. A 3D mesh models the surface explicitly, which is crucial for appearance modeling and animation. To validate the claim, we further conduct a thorough ablation study by replacing the mesh representation with non-surface-bound 3D Gaussian Splatting, leaving other parts unchanged. The experimental results demonstrate the mesh representation achieves a PSNR of 36.73 dB compared to 28.71 dB with 3D-GS. This demonstrates the superior advantage of exploiting surface-bound Gaussian Splatting for this challenging, less constrained optimization problem. Detailed quantitative and qualitative evaluation results can be found in the rebuttal PDF (i.e. Table 2 and Figure 1).
### **Q3: Concerns about computation cost.** **A**: The detailed comparisons against prior works in terms of time and GPU memory usage can be found in the global rebuttal section. We evaluate all the methods on a single NVIDIA RTX 6000Ada graphics card since Consistent4D cannot run on a 24GB GPU. Our method requires 0.8h and 8GB memory for training, which is superior compared to Consistent4D, DreamGaussian4D, 4DGen and STAG4D. For example, Consistent4D requires 2h and 28GB memory for training, and STAG4D requires 1.6h and 7GB memory for training. ### **Q4: Insights of this work.** **A**: The 4D generation task is inherently an under-constrained problem: usually only a monocular video sequence is given. Additional constraints should thus be enforced to achieve better performance. Prior works usually rely on a pre-trained image diffusion model to provide an extra constraint in the form of a score distillation loss. Although they can deliver impressive performance, the quality of the generated 4D asset still has room for improvement, e.g. the asset usually exhibits blurry appearance or unsatisfactory geometry, as shown in our experimental results. The authors suspect the performance gap might be caused by the loosely constrained 4D representations used, i.e. NeRF or 3D-GS with an additional deformation network for dynamic modeling, which is commonly used for the multi-view 4D reconstruction problem. Since multi-view cues are usually sufficient to constrain that problem, prior commonly used 4D representations would thus still be able to deliver good performance. However, this is not the case for 4D generation from a monocular video, which provides far fewer multi-view cues. Therefore, the authors thought additional constraints should be enforced to better constrain this more challenging problem, which motivates us to explore the usage of the Gaussian-mesh hybrid representation.
The Gaussian-mesh representation was first proposed in SuGaR and used for triangular mesh extraction from a pretrained 3D-GS model. It enforces 3D Gaussians to bind to the object surface and degenerate towards 2D Gaussians. This kind of enforcement reduces the parameter search space of the optimization problem, and thus could deliver better performance. The experimental results also confirm our hypothesis and demonstrate superior performance compared to prior works. However, it is not trivial to simply integrate the Gaussian-mesh representation into a 4D pipeline. For example: 1) how to efficiently and effectively deform the mesh-Gaussian representation for 4D dynamics? 2) how to make sure the surface-bound Gaussians are still able to render high-quality textures under deformation? These challenges motivated us: 1) to propose a new skinning algorithm that combines the best of LBS and DQS for this particular problem; 2) to propose the deformation of surface Gaussian attributes; 3) to use a deformation graph to provide additional geometric deformation constraints. All these factors and insights lead to the final formulation and superior performance of our method. Furthermore, mesh-based representation is already very mature in modern CG pipelines. Generated 4D assets with a mesh representation and skinning method can be conveniently exported to existing commercial pipelines, which further motivates us to exploit this representation. The authors humbly think our work would be valuable to the community and provide good insights for future works in relevant fields. --- Rebuttal Comment 1.1: Comment: 1. I also missed the video results like Reviewer 2HE6, because I previously believed that links were not allowed. Though I think the video details are good. However, I am concerned about why the orbital view videos are not shown, as I can't judge the spatial consistency. 2. My concerns are quite similar to those of Reviewer 7V8W.
In Q3: Innovative Contribution of this work under 7V8W, I find the author summarized two contributions. Regarding the changing of Gaussian attributes during deformation, I think it is a basic strategy. In 4DGS (CVPR 2024), the positions, rotations, and scalings are also deformed. Regarding the new skinning algorithm, in the ablation study in the main paper, the evaluation metric did not significantly exceed DQS and LBS. So it is not the key. In the rebuttal PDF, I think the ablation about GS and hyper is important. I admit that the representation's blur problems are mitigated, and I believe the improvement is due to the change in representation, but the technology is existing. Therefore, I think it is a not bad paper, but not an excellent one. So I do not rate it above "borderline accept." --- Reply to Comment 1.1.1: Comment: Thanks for the reviewer's efforts and valuable comments. ### **Q: Missing orbital view videos** **A**: We originally built the anonymous project page following a prior work, i.e. Consistent4D (ICLR2024), that shows the reference view for the demonstration of reconstruction quality and a novel view for that of spatial consistency. We admit that it would be better to present more novel views in a nice page arrangement, and we will improve it. As a supplement, we have also provided the video of our composite scene demo on the same project page (i.e. at the bottom). The demo video contains surrounding views of the generated dynamic objects, demonstrating their high spatial consistency, as also agreed by reviewer *2HE6*. ### **Q: Similar to 4DGS (CVPR2024) on Gaussian attributes deformation** **A**: We carefully checked the implementation of 4DGS (CVPR2024). It takes a different approach from ours. In particular, they use a deformation network to directly predict the delta update of each Gaussian's attributes (i.e. position, rotation, scaling etc.). In contrast, we predict the deformation gradients (i.e.
transformation matrix) of the control nodes in the deformation graph via an MLP network; the Gaussian attributes are then computed from the predicted node deformation via the skinning algorithm in an analytical manner. ### **Q: Utilization of existing 3D representation technology** **A**: The authors humbly think the main contributions of the work can be summarized as: 1) the analysis and findings of what prevents existing methods from delivering better performance, as discussed in the reply to "Q4: Insights of this work"; 2) based on this analysis and these insights, we for the first time demonstrate the possibility and superior advantages of extending the Gaussian-mesh representation to 4D tasks, in comparison to prior methods. Although the Gaussian-mesh hybrid representation is an existing technology for static scenes, no one has really tried to extend it to 4D representation. It is a fact that should not be ignored that almost all 4D representations are developed from existing 3D representations. The authors humbly think *a new use of an old method can be novel if nobody ever thought to use it this way for a new problem, and it really can deliver superior performance*.
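The sparse-control skinning described in the reply above (per-node rigid transforms predicted by a network, vertices deformed analytically by weighted blending) follows the standard deformation-graph pattern. Below is a minimal LBS-style sketch, assuming Euclidean distances and Gaussian weights for brevity (the paper uses geodesic distances and a hybrid LBS/DQS scheme; all names here are illustrative):

```python
import numpy as np

def lbs_deform(verts, nodes, R, t, sigma=0.1):
    """Linear blend skinning driven by sparse control nodes:
    each vertex is deformed by a weighted sum of per-node rigid
    transforms, with weights from a Gaussian of vertex-node distance.
    verts: (N,3), nodes: (J,3), R: (J,3,3), t: (J,3)."""
    d2 = ((verts[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)            # normalize weights per vertex
    # per-node transform of every vertex: R_j (v - g_j) + g_j + t_j
    local = verts[:, None, :] - nodes[None, :, :]
    moved = np.einsum('jab,njb->nja', R, local) + nodes[None] + t[None]
    return (w[:, :, None] * moved).sum(axis=1)
```

With a single node, the weights collapse to 1 and the whole mesh moves rigidly with that node; DQS-style variants replace the linear blend of matrices with a blend of dual quaternions to avoid the candy-wrapper artifacts of pure LBS.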
Summary: This paper proposes DreamMesh4D, which combines a mesh representation with a sparse-controlled deformation technique to generate high-quality 4D objects from a monocular video. The authors bind Gaussian splats to the surface of the triangular mesh for differentiable optimization of both the texture and mesh vertices. The method begins with a coarse mesh from single-image-based 3D generation. Sparse points are then uniformly sampled on the surface of the mesh to build a deformation graph, which drives the motion of the 3D object. For each step, transformations of sparse control points are predicted using a deformation network. The mesh vertices and the bound Gaussians are deformed via a geometric skinning algorithm that combines LBS and DQS. Reference view photometric loss, score distillation loss, and regularization losses are used in a two-stage learning process. Experiments are performed on the Consistent4D dataset. Strengths: - The method addresses the challenging problem of generating dynamic objects from a monocular video. - A new method is proposed that generates dynamic meshes through a static-to-dynamic optimization process. By employing a Gaussian-mesh hybrid representation, the authors simultaneously refine both the geometry and texture of the object. This approach allows the static object to serve as an excellent starting point for dynamic learning. During the dynamic stage, a deformation graph is built on the object’s surface using geodesic distance. - Experiments show the superior performance of this method in generating high-fidelity 4D objects. - The method uses dynamic meshes with good compatibility with modern geometric pipelines in the 3D gaming and film industries. Weaknesses: - One of the main ideas of this paper is to use a Gaussian-mesh hybrid representation, which was originally proposed in the work of SuGaR. The technical difference that is designed to address the special setting of this paper’s task is not clearly explained.
- In figure 1, the input is a composited scene video. But in the rest of the paper, the inputs are well-segmented object videos, which makes the setting of the addressed task confusing. - The method combines some existing methods such as SuGaR, deformation graph, and LBS and DQS based skinning. It is not easy to identify the authors’ innovative contribution. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. One of the main ideas of this paper is to use a Gaussian-mesh hybrid representation; what is the difference between the proposed method and SuGaR? 2. In figure 1, the input is a composited scene video. Is a segmentation method needed to extract each object region to use as input to the method? There seems to be no mention of this in the rest of the paper. How is the mutual occlusion between the objects solved by the proposed method for the scenario in figure 1? 3. What is the computational cost of the proposed method? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors are grateful for the thoughtful feedback and valuable questions. Thanks for the support for this work! Below we will address the questions and concerns separately. ### **Q1: The difference between the proposed method and SuGaR.** **A**: While SuGaR proposes a pipeline reconstructing static surfaces via a Gaussian-mesh hybrid representation from pre-trained 3D Gaussians, our work mainly exploits the proposed hybrid representation as our base representation and focuses on extending it to the video-to-4D generation task, similarly to most prior works, which rely on NeRF or 3D-GS as the base representation. Although we utilize the static 3D representation proposed by SuGaR, trivially applying it to 4D tasks brings challenges. For example: 1) how to effectively preserve the Gaussian appearance while the asset undergoes deformation (i.e. surface Gaussian attributes update during deformation); 2) how to efficiently and effectively deform the vertices of the mesh-Gaussian representation (i.e. deformation graph and hybrid skinning). ### **Q2: Concerns about Figure 1.** **A**: The input of our method is a monocular video of the target object under a fixed view, which is consistent with the setup used in previous works. As for the composite scene video in Figure 1, it is a demo created by ourselves in Blender using our generated assets. It is to demonstrate that the 4D assets generated by our method can be directly integrated into downstream graphics engines, which further proves the advantage of our work. We will make this clearer in the final version. ### **Q3: Innovative contribution of this work.** **A**: The innovative contribution of our work is that we for the first time demonstrate the possibility and superior advantages of extending the Gaussian-mesh representation to 4D tasks, in comparison to prior works, which mainly exploit NeRF or 3D-GS as the underlying representation.
As R3 also points out, it is informative to the community that SuGaR works well in a more complex 4D pipeline. Although SuGaR, deformation graph, LBS and DQS are existing techniques, trivially integrating them would not lead to optimal performance. We therefore 1) proposed to change the Gaussian attributes during deformation; 2) proposed a novel skinning algorithm suitable for such optimization-based scenarios with insufficient constraints. The experimental results demonstrate the effectiveness of our design, and our method achieves superior performance in comparison to prior baseline methods. ### **Q4: Computation cost of the proposed method.** **A**: The time and memory complexities are 0.8h and 8GB for training on a single NVIDIA RTX 6000Ada graphics card, which is also superior compared to prior methods and further demonstrates the advantage of our formulation. The detailed metrics in comparison to other methods can be found in the global rebuttal. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for providing the detailed explanations. For my question 2 about Figure 1, as I guessed, it is a demo created by the authors using their generated assets. I think this should be clearly explained in the final version of the paper, because with the current Figure 1 it is easy to misunderstand that the composited scene demo is the input video. For my concerns about the innovation of the paper, the authors explained what is new compared to a straightforward combination of SuGaR, deformation graph, LBS and DQS. The clarifications are reasonable and are meaningful for a pipeline that yields dynamic meshes with good compatibility with modern graphics. In regard to the above, I'd tend to keep my previous positive rating. --- Reply to Comment 1.1.1: Comment: We are pleased to hear that our rebuttal addressed your concerns well, and we are grateful for your efforts and time in reviewing.
If you have any further concerns, questions, or suggestions, please do not hesitate to let us know.
null
null
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their valuable feedback and agree that: * we address the challenging problem of generating dynamic objects from a monocular video; * the method to generate dynamic meshes through a static-to-dynamic optimization process is new, and the loss terms make sense and are intuitive; it was also informative that the SuGaR SDS would work well in a complex 4D pipeline; * the usage of mesh and skinning has good compatibility with modern computer graphics pipelines in the 3D gaming and film industries; * the experimental results show the superior performance of our method in generating high-fidelity 4D objects compared to previous methods; * the paper is clearly written, very informative, well contextualized, in good structure, and keeps good track of prior methods. In the following, we will first report both the time and memory complexities of our method against prior works, as requested by all reviewers, and then address each individual reviewer's concerns respectively. All the responses will be incorporated into the final version. ## Comparisons on time and memory complexities. ||Consistent4D|DreamGaussian4D|4DGen|STAG4D|Ours| |:---:|:---:|:---:|:---:|:---:|:---:| |Training Time|2.0h|0.6h|3.0h|1.6h|0.8h| |Memory|28GB|20GB|15GB|7GB|8GB| The above table shows the computation cost of our method compared to prior works. Our method is efficient in terms of both training time and memory consumption. In particular, our method requires 0.8h and 8GB memory during training, which is more efficient than Consistent4D and 4DGen. Although DreamGaussian4D requires 0.6h to train the model, it requires much more memory (i.e. 20GB). STAG4D requires a similar level of memory consumption; however, it requires twice the training time of ours. Since Consistent4D cannot run on GPUs with 24GB of memory, we train all methods on a single NVIDIA RTX 6000Ada graphics card with 48GB of memory for fair complexity comparisons.
The results further demonstrate the advantage of our method compared to previous works. Pdf: /pdf/89f2089e813521614ba630002ae8475a4bdb6d15.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Beating Adversarial Low-Rank MDPs with Unknown Transition and Bandit Feedback
Accept (poster)
Summary: This paper studies online learning in low-rank MDPs with adversarial losses (having linear structure), and derives a set of regret guarantees: 1. A model-based and inefficient algorithm ensures $T^{2/3}$ regret, assuming the loss feature vector is unknown. The algorithm has a two-stage design: the first stage is an initial reward-free exploration stage from [1] to ensure low estimation error for the second (exploitation) stage. 2. A model-free and oracle-efficient algorithm ensures $T^{5/6}$ regret, assuming the loss feature vector is unknown. This is thanks to recent advances in exploration for low-rank MDPs by [2]. 3. A model-free and oracle-efficient algorithm ensures $T^{5/6}$ regret against adaptive adversaries, assuming the loss feature vector is known, by leveraging a representation learning algorithm from [2]. Strengths: By adopting recent progress in representation learning and low-rank MDPs from the literature, the paper provides a set of results for low-rank MDPs with adversarial (linear) losses under bandit feedback. This is the first set of results for the more challenging bandit feedback. Weaknesses: I overall appreciate the contributions from this work, in which the authors take advantage of SOTA techniques in the literature, but also with some original adaptations, to give a complete picture of regret bounds in a new setup. However, I think the authors could do a better job in presentation and delivering insights, which may make readers benefit more from reading this paper. Below are some questions that I think should be made clear(er) in the paper (and I don’t get an answer after reading the paper): 1. Maybe I missed it somewhere, but what is exactly the “oracle” that is referred to throughout this paper? Is it hidden in the VoX algorithm? 2. Even with linear loss structure, the regret still has to be $\Omega(\sqrt{A})$; is this a common result in the low-rank MDP literature, or unique due to the adversarial losses (and bandit feedback)? 3.
In Thm. 4.1, is $\delta$ the failure probability for the regret guarantee? In other words, is Thm. 4.1 stating a high-probability regret bound? If so, why is the way of stating a high-prob. bound in Thm. 4.1 different from that in Thm. 5.1? If not, then the use of notation $\delta$ is not consistent in the paper. 4. What’s the difficulty of obtaining regret bounds against adaptive adversaries, without knowing the loss feature map? In Section 5, the sentence “Given the difficulty of this setting, we make the additional assumption” fails to yield a smooth transition, as I don’t get the connection between the difficulty (it’s not even clear what it is) and the need for the assumption. 5. Does the difficulty lie in obtaining high-prob regrets against oblivious adversaries? Is it true that with the help of the known loss feature, one can establish a high-prob. regret bound, which easily implies regret bounds against adaptive adversaries? 6. If Algorithm 4 is mostly from other papers and there’s no novel modification, the authors may consider moving it to the appendix to save some space in the main body for more informative messages. 7. In line 10 of Algorithm 1, “Define $\sum_h^t = \sum$”, anything missing before the $=$? 8. Sketched proofs in the main body for deriving regret bounds would be very helpful, but I fully understand the omission given the page limit. 9. If a reference has been accepted to a conference, the authors may want to cite the conference version rather than arXiv, unless there's a significant update. I will consider increasing my rating to 6 and supporting acceptance if the authors can address these questions so that I can have a clearer understanding of the results. Technical Quality: 3 Clarity: 2 Questions for Authors: Questions here are mainly for curiosity and will not affect my recommendation/rating much. 1. Typically, such a two-stage design (initial exploration for warm-up + standard online learning) may not give a rate-optimal guarantee. 
In adversarial linear MDPs, [3] performs estimation on-the-fly to get the first rate-optimal regret. I was wondering what would be the difficulty of doing similar things here (and enjoying better regrets). Could the authors point out some other places where the sub-optimality comes from? 2. In Thm. 4.1, the regret scales with the size of the function class. I think it makes some sense, but are there any similar results in the literature? References [1] Cheng, Yuan, Ruiquan Huang, Yingbin Liang, and Jing Yang. "Improved Sample Complexity for Reward-free Reinforcement Learning under Low-rank MDPs." In The Eleventh International Conference on Learning Representations. 2022. [2] Mhammedi, Zakaria, Adam Block, Dylan J. Foster, and Alexander Rakhlin. "Efficient Model-Free Exploration in Low-Rank MDPs." arXiv preprint arXiv:2307.03997 (2023). [3] Liu, Haolin, Chen-Yu Wei, and Julian Zimmert. "Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback." In The Twelfth International Conference on Learning Representations. 2023. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: In the checklist, it says "The paper discuss the limitations of the work in the discussion section", but I didn't find a section named "discussion". ====== update after author-reviewer discussion ====== During the initial rebuttal and the discussion phase, the authors addressed my clarification question, so as of now I increase my score from 5 to 6 Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your support and valuable feedback. We will adopt your writing suggestions in our future version. Your questions about $poly(A)$ dependence in regrets and the difficulty of adaptive adversaries are explained in **Q1** and **Q3** of the global response. We answer your other questions below. **Q1**: *What is exactly the “oracle” that is referred to throughout this paper? Is it hidden in the VoX algorithm?* **A**: Yes, the oracle is hidden in VoX. A call to VoX requires a polynomial number of calls to a min-max optimization Oracle over a class of value functions to learn a good representation; the Oracle is required by the RepLearn subroutine within VoX. We note that this Oracle is by now standard and has been used in many other works such as [Mhammedi et al., 2023; Zhang et al., 2022b]. **Q2**: *In Thm. 4.1, is $\delta$ the failure probability for the regret guarantee? In other words, is Thm. 4.1 stating a high-probability regret bound?* **A**: Thank you for pointing it out. This is just a typo in the statement of the theorem. The result in Theorem 4.1 is in expectation and $\delta$ should be replaced by $poly(1/T)$. **Q3**: *Does the difficulty of adaptive adversaries lie in obtaining high-prob regrets against oblivious adversaries? Is it true that with the help of the known loss feature, one can establish high-prob. regret bound, which easily implies regret bounds against adaptive adversaries?* **A**: The situation is a little more subtle than this. Even with a known loss feature, Algorithm 2 would still fail against an adaptive adversary because the estimation of the average $Q$-functions within an epoch can no longer be reduced to a standard least-squares regression problem with i.i.d. data; see the answer to **Q3** in the general response. 
Instead, the algorithm in Section 5 aims at estimating the average $Q$-functions, not in a least-squares sense, but in expectation over roll-ins using policies in $\Psi_{\texttt{span}}$; the "in-expectation" estimation task is in a sense easier. Here, $\Psi_{\texttt{span}}$ is a set of policies with good coverage over state-action pairs. More specifically, $\Psi_{\texttt{span}}$ is required to be such that $\{ \mathbb{E}^{\pi}[\phi^{\texttt{loss}}(x_h, a_h)] \mid \pi \in \Psi_{\texttt{span}} \}$ essentially covers all directions in $\mathbb{R}^d$. For this reason, we need $\phi^{\texttt{loss}}$ to compute $\Psi_{\texttt{span}}$. **Q4**: *In line 10 of Algorithm 1, Define $\Sigma_h^t = \sum$, anything missing before the $=$ ?* **A**: $\Sigma_h^t$ is the covariance matrix at time $t$ and step $h$, whose definition is on the right-hand side of "$=$". The $\sum$ on the right-hand side is the summation notation. **Q5**: *In adversarial linear MDPs, [3] performs estimation on-the-fly to get the first rate-optimal regret. I was wondering what would be the difficulty of doing similar things here (and enjoying better regrets). Could the authors point out some other places where the sub-optimality comes from?* **A**: Indeed, the suboptimal regret of Algorithm 1 is due to the two-stage design. The online occupancy estimation technique in [3] relies crucially on the transition feature being known. It is unclear how to extend this to the low-rank setting with unknown features. Getting the optimal regret is still a challenging open problem. **Q6**: *If a reference has been accepted to a conference, the authors may want to cite the conference version rather than arXiv, unless there's a significant update.* **A**: Thank you for the suggestion. For the case of [Mhammedi et al., 2023] in particular, the paper has significant updates on arXiv compared to their camera-ready NeurIPS version (it removes the reachability assumption with a different analysis). 
Thus, we cite both versions in our paper. We will change other references to the conference version. **Q7**: *In Thm. 4.1, the regret scales with the size of the function class. I think it makes some sense, but are there any similar results in the literature?* **A**: For low-rank MDPs, the $\log(|\Phi|)$ in Theorem 4.1 matches the best-known result in the stochastic setting [Mhammedi et al., 2023; Zhang et al., 2022b]. Most papers on low-rank MDPs have a worse dependence $\log(|\Phi||\Upsilon|)$ under the model-based assumption (e.g. [Agarwal et al., 2020; Uehara et al., 2022]). Moreover, the logarithmic dependence on the size of the function class is very common in RL with general function approximation. For example, in the general decision estimation coefficient (DEC) framework, such dependence appears in both model-based (Theorem 3.3 in [Foster et al., 2021]) and model-free (Theorem 2.1/Corollary 2.1 in [Foster et al., 2023]) algorithms. **References**: [Mhammedi et al., 2023] Mhammedi, Z., Block, A., Foster, D. J., and Rakhlin, A. (2023). Efficient model-free exploration in low-rank MDPs. arXiv preprint arXiv:2307.03997. [Zhang et al., 2022b] Zhang, X., Song, Y., Uehara, M., Wang, M., Agarwal, A., and Sun, W. (2022b). Efficient reinforcement learning in block MDPs: A model-free representation learning approach. ICML. [Agarwal et al., 2020] Agarwal, A., Kakade, S., Krishnamurthy, A., and Sun, W. (2020). FLAMBE: Structural complexity and representation learning of low rank MDPs. NeurIPS. [Uehara et al., 2022] Uehara, M., Zhang, X., and Sun, W. (2022). Representation learning for online and offline RL in low-rank MDPs. ICLR. [Foster et al., 2021] Foster, D. J., Kakade, S. M., Qian, J., and Rakhlin, A. (2021). The statistical complexity of interactive decision making. arXiv preprint arXiv:2112.13487. [Foster et al., 2023] Foster, D. J., Golowich, N., Qian, J., Rakhlin, A., and Sekhari, A. (2023). Model-free reinforcement learning with the decision-estimation coefficient. NeurIPS. 
--- Rebuttal Comment 1.1: Title: Regarding Q2 and Thm. 4.1 Comment: I thank the authors for the response. How Thm 4.1 should be corrected is still unclear to me from the response. Could the authors state the corrected Thm 4.1 in this thread? Also, what is the role of $\delta$ (confidence parameter) here? What happens if we make it very small (close to 0) or large (close to 1)? --- Rebuttal 2: Title: Statement of Theorem 4.1 Comment: Thank you for your comments. The statement of Theorem 4.1 should read as follows: **Theorem 4.1.** *Suppose Assumptions 2.1 and 2.2 hold. Then, for $T =\text{poly}(A,H,d, \log(|\Phi|))$ sufficiently large, Algorithm 2 guarantees $Reg_T \leq \text{poly}(A,H,d, \log(|\Phi|T)) \cdot T^{5/6}$ against an oblivious adversary, where $Reg_T :=E_{\rho^{1:T}}[\sum_{t=1}^T E_{\pi^t\sim \rho^t}[V^{\pi^t}(x_1;\ell^t)] -\sum_{t=1}^T V^{\pi}(x_1;\ell^t) ]$.* Note that $Reg_T$ in this statement is a deterministic quantity; there is no randomness. The $\delta$ in the submission was just a typo. We now explain why there is no $\delta$ in the statement above even though the analysis of Theorem 4.1 (in particular Lemmas G.4 and G.5) has a $\delta$. The current analysis of Algorithm 2 implies that under the setting of Theorem 4.1, we have that for any $\delta\in(0,1)$, there is an event $\mathcal{E}(\delta)$ with probability at least $1-\delta$ under which $\widehat{Reg}_T \leq \text{poly}(A,H,d, \log(|\Phi|/\delta)) \cdot T^{5/6}$, where $\widehat{Reg}_{T} := \sum_{t=1}^T E_{\pi^t \sim \rho^t}[V^{\pi^t}(x_1;\ell^t)] -\sum_{t=1}^T V^{\pi}(x_1;\ell^t)$. Note that the randomness here comes from $\rho^{1:T}$. 
Now, since $Reg_T = E_{\rho^{1:T}}[\widehat{Reg}_T]$ and $\widehat{Reg}_T \leq H T$, we have $Reg_T = P[\mathcal{E}(1/T)] E_{\rho^{1:T}}\left[\widehat{Reg}_T \mid \mathcal{E}(1/T) \right] + P[\mathcal{E}(1/T)^c] E_{\rho^{1:T}}\left[\widehat{Reg}_T \mid \mathcal{E}(1/T)^c \right] \leq \text{poly}(A,H,d, \log(|\Phi|T)) \cdot T^{5/6} + H,$ where we used that $P[\mathcal{E}(1/T)^c] \leq 1/T$ and the bound on $\widehat{Reg}_T$ under $\mathcal{E}(1/T)$. Note that the high-probability bound on $\widehat{Reg}_T$ above is still not good enough for an adaptive adversary; see the answer to Q3. We hope this clarifies things. Please let us know if you have any more questions. --- Rebuttal Comment 2.1: Comment: I thank the authors for stating the corrected theorem and walking me through the proof sketch. I now understand that Theorem 4.1 is meant to state an in-expectation upper bound only, and $\delta$ is just the probability with which some good event doesn't hold. I don't have any other questions as of now. As I promised, I am increasing my score from 5 to 6.
Summary: This paper studies adversarial Low-Rank MDPs with unknown transition and bandit feedback. The authors give three main results, targeting either tighter regret or computational efficiency. The authors show that the linear structure of the reward function is necessary in the case of bandit feedback to achieve regret without dependence on the number of states. Strengths: 1. The paper addresses a novel and challenging problem of adversarial Low-Rank MDPs with unknown transition and bandit feedback. The problem is well-motivated and has practical implications in real-world applications. 2. The theoretical results are strong and the analyses are sound. The authors provide a comprehensive analysis of the proposed algorithms and prove the theoretical guarantees. The authors combine several techniques to handle the unknown transition and bandit feedback, which is non-trivial. 3. The paper provides a lower bound to show that one cannot gain much from a low-rank transition structure compared with tabular MDPs. This lower bound is insightful and provides a clear motivation for the linear reward structure in the bandit feedback setting. Weaknesses: 1. The computational cost of the proposed algorithms is huge, even for the oracle-efficient algorithms. The authors should provide more discussion of the computational complexity of the proposed algorithms. 2. Though this paper is the first work to study adversarial Low-Rank MDPs with unknown transition and bandit feedback, the $O(T^{5/6})$ regret bound is not very satisfying. Moreover, the results have a linear dependence on the number of actions, which is not ideal. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Is polynomial dependence on the number of actions necessary? Can the authors provide some intuition about the reason or discuss the possibility of improving the dependence on the number of actions? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your support and valuable feedback. Your question about $poly(A)$ dependence in regrets is explained in **Q1** of the global response. We answer your other questions below. **Q1**: *The authors should provide more discussions on the computational complexity of the proposed algorithms.* **A**: The main result in [Mhammedi et al., 2023] guarantees that the number of calls to the optimization Oracle within VoX is polynomial in $d$, $A$, $\log |\Phi|$, and $1/\epsilon$. Due to this, VoX is an Oracle-efficient algorithm. Since our algorithm (Algorithm 2 in this context) makes a single call to VoX, it automatically inherits its Oracle efficiency. All the other steps in Algorithm 2 can be performed computationally efficiently; in fact, the most computationally expensive step is the call to VoX. Our Algorithm 1 is generally computationally inefficient. The computational complexity scales with $|\Pi|$ because we need to maintain exponential weights over policy class $\Pi$. For the linear policy class defined in Line 235, the computational complexity of Algorithm 1 has order $|\Phi| T^d$. [Mhammedi et al., 2023] Mhammedi, Z., Block, A., Foster, D. J., and Rakhlin, A. (2023). Efficient model-free exploration in low-rank mdps. arXiv preprint arXiv:2307.03997 --- Rebuttal Comment 1.1: Title: Thank you for your response. Comment: I thank the authors for their response and have no further questions.
Summary: This work initiates the study of learning adversarial low-rank MDPs with bandit feedback. The authors propose an inefficient algorithm with $T^{2/3}$ expected regret, and an oracle-efficient algorithm with $T^{5/6}$ expected regret. Further, the authors also show an oracle-efficient algorithm with $T^{5/6}$ high-probability regret. Strengths: 1. This work further advances the understanding of learning adversarial MDPs with general function approximation, and the algorithmic designs are of interest. Weaknesses: 1. The first algorithm is generally hard to implement due to its computational intractability and thus might be less appealing to practitioners in the area. Besides, the algorithmic design of the first algorithm is similar to those of [1,2] in the sense that learning adversarial MDPs is somewhat reduced to learning adversarial bandits by running exponential weights over the policy set of the corresponding MDP. I would suggest the authors give more comparisons and discussions between the design of the first algorithm and the algorithms in [1,2]. [1] Kong et al. Improved Regret Bounds for Linear Adversarial MDPs via Linear Optimization. TMLR, 24. [2] Liu et al. Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback. ICLR, 24. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Eq. (12), a log-barrier instead of the common Shannon entropy regularizer is used in FTRL. Is this because it is not feasible to bound the stability term in the analysis of FTRL when using Shannon entropy, owing to the possibly very large magnitude of the constructed $Q$-function estimates? 2. Intuitively, what is the reason that Algorithm 2 still needs to operate in epochs after the phase of reward-free exploration? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your support and valuable feedback. Your question about why we use the log-barrier is explained in **Q2** of the global response. We answer your other questions below. **Q1**: *The first algorithm is similar to learning adversarial MDPs by reducing to learning adversarial bandits. I would suggest the authors give more comparisons and discussions between the design of the first algorithm and the algorithms in linear MDPs [1,2].* **A**: Our Algorithm 1 takes a different approach from reducing learning MDPs to linear bandits. In a linear MDP with a known feature map $\phi$ and a linear loss $\ell_h(x,a) = \phi(x,a)^\top g_h$, we have: $$V_1^\pi(x_1; \ell) = \mathbb{E}^\pi\left[\sum_{h=1}^H \ell_h(x_h, a_h)\right] = \sum_{h=1}^H \mathbb{E}^\pi\left[\phi(x_h, a_h)\right]^\top g_h.$$ This means that the linear MDP problem can be reduced to a linear bandit instance where the action set is $\{(\mathbb{E}^\pi\left[\phi(x_h, a_h)\right])_{h \in [H]}: \forall \pi \}$ and $(g_1, \ldots, g_H)$ is the hidden loss vector. This action set can be estimated by estimating occupancy measures; this is the approach taken by [1,2]. With this, [1,2] use the standard linear bandit loss estimator with estimated actions. The reduction to linear bandits in [1,2] relies crucially on the linearity of the loss functions. In contrast, our Algorithm 1 does not require linear losses. Instead, it only relies on the linear structure of the transitions (i.e. the low-rank structure). Moreover, since the feature maps are unknown in our setting, we cannot use the standard linear bandit loss estimator as in [1,2]. To address this, Algorithm 1 learns a representation in an initial phase, which is then used to compute a new loss estimator carefully designed for low-rank MDPs. 
**Q2**: *Why does Algorithm 2 still need to operate in epochs after the phase of reward-free exploration?* **A**: The role of the reward-free phase is to learn a policy cover; a set of policies with good coverage over the state space. In the non-adversarial setting, once such a policy set is computed, finding a near-optimal policy is relatively easy. However, in the adversarial setting, the losses change across rounds and the algorithm needs to constantly estimate the $Q$-functions to play good actions. The algorithm uses epochs for this estimation task, where the longer the epoch (the more episodes in an epoch), the more accurate the estimated $Q$-functions. --- Rebuttal Comment 1.1: Comment: Thanks for your feedback. Could you please elaborate further on the details of the results if the negative entropy instead of the log-barrier regularizer is used? What would be the concrete dependence of the results on $T$ and $A$ if negative entropy is employed? --- Reply to Comment 1.1.1: Title: Use of the negative entropy vs the log-barrier Comment: Thank you for your comments. After a more careful inspection, we found that using the negative entropy regularizer would lead to a better dependence on $A$ while keeping the dependence on $T$ unchanged compared with the log-barrier regularizer. As mentioned in the rebuttal, using the negative entropy regularizer requires $\eta |\widehat{Q}_h^{(k)}(x, a)|\le 1$ for any $k,h,x,a$. This prevents one from choosing $\eta$ optimally for the standard loss estimator in the linear MDP setting (due to the large magnitude of the estimator). However, the estimator we use for the low-rank setting in Algorithm 2 has the property that $\widehat{Q}_h^{(k)}(x,a) \le H\sqrt{d}$ for any $x,a,k,h$. This implies that the optimal choice of $\eta$ in our setting (which satisfies $\eta=1/\text{poly}(T)$) is well within the range implied by the constraint $\eta |\widehat{Q}_h^{(k)}(x, a)|\le 1$. 
The same applies in the setting of Algorithm 3, where $\widehat{Q}_h^{(k)}(x,a) \le 8Hd^2$. We would also like to correct a typo in our current proof. In the current analysis of the log-barrier (Lines 495 and 554), the penalty term is missing a factor of $A$. According to Lemma G.1, the correct penalty term should be $\frac{N_{reg} A \log(T/\gamma)}{\eta}$ instead of $\frac{N_{reg} \log(T/\gamma)}{\eta}$. On the other hand, for negative entropy, the penalty term is $\frac{N_{reg} \log(A)}{\eta}$ from Lemma G.6. Thus, since the stability terms for both the log-barrier and the negative entropy are the same, this implies that the correct regret for the log-barrier has a slightly worse dependence on $A$ compared with the negative entropy. We will change our algorithm and analysis to negative entropy in the revision (our final regret bounds will remain the same up to log factors in $A$).
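To make the regularizer comparison in this thread concrete, here is a minimal, generic sketch of the two FTRL updates over the probability simplex. It is an illustration only, not the paper's Algorithm 2: the function names and the bisection solver for the log-barrier update are assumptions introduced here. It shows the structural difference under discussion: negative entropy yields the closed-form exponential-weights update, while the log-barrier update has no closed form and is solved numerically.

```python
import numpy as np

def ftrl_negative_entropy(cum_loss, eta):
    """FTRL over the simplex with negative-entropy regularizer:
    argmin_p <p, L> + (1/eta) * sum_a p_a log p_a.
    The closed-form solution is the exponential-weights distribution."""
    w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shift for numerical stability
    return w / w.sum()

def ftrl_log_barrier(cum_loss, eta):
    """FTRL over the simplex with log-barrier regularizer:
    argmin_p <p, L> - (1/eta) * sum_a log p_a.
    First-order conditions give p_a = 1 / (eta * L_a + lam) for a multiplier
    lam, which we find by bisection so that the probabilities sum to 1."""
    n = len(cum_loss)
    lo = -eta * cum_loss.min() + 1e-12  # keeps every denominator positive (sum -> inf)
    hi = -eta * cum_loss.min() + n      # here each p_a <= 1/n, so the sum is <= 1
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        if np.sum(1.0 / (eta * cum_loss + lam)) > 1.0:
            lo = lam
        else:
            hi = lam
    p = 1.0 / (eta * cum_loss + 0.5 * (lo + hi))
    return p / p.sum()
```

Note how the log-barrier solution keeps every coordinate strictly positive regardless of how large the (estimated) losses are, which is the flexibility in choosing $\eta$ that the rebuttal refers to; the exponential-weights update instead needs $\eta$ times the loss magnitude to stay bounded for the stability term to behave.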
Summary: The paper explores regret minimization in low-rank MDPs with adversarial bandit loss and unknown transitions. This means that the transition can be expressed as $P(x' \mid x,a ) = \phi^\star (x,a)^T\mu^\star(x')$ for some unknown $\phi^\star$ and $\mu^\star$, and the loss is also linear in $\phi^\star$. They consider three distinct settings: 1. **Model-based setting**: Here, the authors assume access to function classes containing $\mu^\star$ and $\phi^\star$. They use some reward-free algorithm to obtain $\mu$ and $\phi$ that approximate the dynamics well, and then run an EXP2-style algorithm on some $\epsilon$-cover of the class of linear policies. This algorithm is computationally inefficient and achieves $T^{2/3}$ regret, with some logarithmic dependencies on the size of the function classes. 2. **Model-free setting**: In this setting, they don't assume access to a class that contains $\mu^\star$. Once again, the algorithm begins with some off-the-shelf exploration phase that aims to find a policy cover. That is, a (small) set of policies that guarantees some sense of minimal reachability to all reachable states. Next, they run an FTRL-style algorithm in each state with log-barrier regularization on blocks of a certain size. Within each block, the algorithm plays the policy derived from the FTRL, mixed with a uniform policy from the policy cover. The policy cover's exploration guarantee allows for good least-squares estimation of the Q-function for the policy employed in that block, subsequently used in the FTRL. This achieves $T^{5/6}$ regret against an oblivious adversary. 3. **Model-free setting with adaptive adversary**: Here, the adversary adjusts their strategy based on the learner's actions, and there is an additional assumption that the loss features are known (and the dynamics feature map is still unknown). The algorithm is similar to the algorithm in 2. 
but adds some representation learning algorithm on top of the reward-free algorithm, which is suggested to be beneficial against an adaptive adversary. The regret bound obtained in this setting is also of order $T^{5/6}$. Strengths: - Low-rank MDPs and representation learning are important areas of study in reinforcement learning, particularly due to the relevance of representation learning to deep RL. The paper advances research in this regime by extending previous results from full-information settings to bandit feedback. - The paper introduces three algorithms, each tailored to a distinct setting. These algorithms manage to achieve similar regret guarantees to previous works that dealt with the full-information case (although the settings are not directly comparable). Additionally, the authors provide a compelling justification for why they cannot handle arbitrary (non-linear) loss as in the full-information case by presenting a lower bound that scales with the number of states. Weaknesses: - The approach taken in Algorithm 1 is somewhat generic, employing an off-the-shelf reward-free exploration phase followed by an inefficient EXP2 algorithm. Most of the complexity introduced by the unknown feature maps is largely addressed by leveraging existing reward-free algorithms, possibly underplaying the novelty or the specific challenges directly tackled by the new algorithm itself. - Section 5 lacks detailed explanations. The section does not adequately clarify what specifically becomes more challenging in the adaptive adversary setting, nor does it thoroughly explain how the spanner policies effectively address these challenges. The description merely states that these policies are necessary *"because the estimation of the Q-functions becomes much more challenging with an adaptive adversary"*. This lack of detail leaves the reader without a clear understanding of the problem mechanics or the solution's efficacy in this context. 
Technical Quality: 3 Clarity: 2 Questions for Authors: - Could you please clarify how (41) is obtained from Lemma G.5? (41) bounds the average distance, while, to my understanding, G.5 bounds the distance on the data. It feels like some argument on the generalization gap is missing. - Is there a specific reason to use the log-barrier and not negative-entropy regularization in Algorithm 2? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: As far as I'm concerned, the authors address most of their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and for your valuable feedback. Your questions about the use of the log-barrier and the difficulty of dealing with an adaptive adversary are explained in **Q2** and **Q3** of the global response. We answer your other questions below. **Q1**: *In Algorithm 1, most of the complexity introduced by the unknown feature maps is largely addressed by leveraging existing reward-free algorithms, possibly underplaying the novelty or the specific challenges directly tackled by the new algorithm itself.* **A**: The main novelty of Algorithm 1 lies in the design of the loss estimators; in a low-rank MDP where the feature map is unknown, standard loss estimators used for linear MDPs fail. The challenge here is that the loss estimators have to be carefully designed to account for any errors in the learned representation from Phase 1. This is a non-trivial challenge that existing methods before our paper do not address. **Q2**: *Could you please clarify how (41) is obtained from Lemma G.5? It feels like some argument on the generalization gap is missing.* **A**: You are right. Equation (41) follows from Lemma G.4 and Lemma G.5. We will add the reference to Lemma G.4. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I have read through all the other reviews and your responses. At this point, I do not have any further questions. I will reconsider my score based on the discussions and additional inputs provided during the current phase and the next phase of the review process.
Rebuttal 1: Rebuttal: ## **Global Response** We thank all the reviewers for their valuable feedback. We would like to clarify several common concerns here. **Q1**: *The $poly(A)$ dependence in the regret bound looks not ideal. Is it common in the low-rank MDP literature?* **A**: We would like to point out that polynomial dependence on the action-set size $A$ is unavoidable even in non-adversarial low-rank MDPs with linear loss; this follows from the lower bound in Theorem 4.2 and Appendix B of [Zhao et al., 2024]. [Zhao et al., 2024] Zhao, C., Yang, R., Wang, B., Zhang, X., and Li, S. (2024). Learning adversarial low-rank Markov decision processes with unknown transition and full-information feedback. NeurIPS. **Q2**: *Why do we use the log-barrier?* **A**: For the $Q$-function estimator $\widehat{Q}$ with learning rate $\eta$, negative entropy regularization requires $\eta \widehat{Q} \ge -1$ to ensure the stability term in the regret is small. Using the log-barrier, we can get rid of this constraint, leading to more flexibility in the choice of $\eta$ and ultimately a better regret bound (compared to the negative entropy) after tuning $\eta$. The price of using the log-barrier is a polynomial dependence on the number of actions $A$. However, as discussed in **Q1**, such dependence is unavoidable for low-rank MDPs. **Q3**: *Section 5 lacks detailed explanations. Why does the estimation of the Q-functions become much more challenging against an adaptive adversary without knowing the loss feature map?* **A**: Our algorithm for oblivious adversaries works in epochs, where at each epoch $k$ the algorithm commits to a policy $\widehat{\pi}^{(k)}$ for $N_{reg}$ episodes. The algorithm then uses these trajectories to compute an estimate $\widehat{Q}^{(k)}$ of the average $Q$-function $\frac{1}{N_{reg}} \sum_{t \text{ in epoch } k}Q^{\widehat{\pi}^{(k)}}_h(\cdot,\cdot ;\ell^t)$. 
In the case of an oblivious adversary, this estimation problem can be reduced to a standard regression problem with i.i.d. data. This is possible because the regression target is an average of $Q$-functions over $t$ episodes in epoch $k$, and this target is invariant to any permutation of the episodes $t$ thanks to the fact that losses are oblivious. And so, by randomly shuffling the episodes in epoch $k$, one can reduce the regression problem to an i.i.d. one. When dealing with an adaptive adversary, the order of the episodes within an epoch matters and the average $Q$-function targets are no longer invariant to permutations of the episodes. For this reason, we cannot rely on standard regression guarantees with i.i.d. data.
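The permutation-invariance argument above can be checked numerically with a toy stand-in: for an oblivious adversary the epoch's loss sequence is fixed before the epoch starts, so the averaged regression target is unchanged under any reordering of the episodes. The snippet below only illustrates this symmetry (the random arrays are stand-ins, not the paper's $Q$-function estimates).

```python
import numpy as np

rng = np.random.default_rng(0)
n_episodes, n_actions = 16, 4

# Oblivious adversary: the entire loss sequence for the epoch is committed in
# advance, independently of the learner's realized trajectories.
losses = rng.random((n_episodes, n_actions))

# The regression target is an average over the epoch's episodes. Averaging is
# symmetric in the episodes, so any permutation of the episode order yields the
# same target -- this is what lets one shuffle the epoch's episodes into an
# i.i.d. regression dataset.
target = losses.mean(axis=0)
shuffled_target = losses[rng.permutation(n_episodes)].mean(axis=0)
assert np.allclose(target, shuffled_target)
```

Against an adaptive adversary, each row of `losses` would instead depend on the learner's behavior in earlier episodes, so the order of the rows carries information and the same shuffling trick no longer produces an exchangeable dataset.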
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
Accept (spotlight)
Summary: The paper presents a novel initialization strategy for LoRA fine-tuning where A and B are initialized according to the top r singular values and the pre-trained weights are initialized with the remaining components. The authors show that this decomposition can help fine-tuning converge faster and behaves favorably compared to traditional quantization techniques such as NF4. The authors conduct experiments on NLG and NLU tasks that show performance improvements of PiSSA over vanilla LoRA. Strengths: **Presentation** The theoretical motivation of the paper is clear and the method is well motivated and explained. **Empirical results** The authors provide several supporting ablations and experiments on improved quantization error compared to LoftQ and QLoRA, faster convergence, etc. Weaknesses: **Significance of results** The authors do not provide error bars in their results, which calls into question whether the obtained results are actually statistically significant. Error bars should be provided and significance tests should be performed. **Unsupported claims** The authors claim that LoRA gradients point in random directions early during the fine-tuning stage, which introduces wasteful update steps. Was this claim experimentally verified? If so, the authors should clarify how these results were obtained and elaborate on them in more detail in the paper. **Comparison to baselines** PiSSA is not compared to other commonly used initialization schemes, e.g., uniform Gaussian [1] or Kaiming init [2], for NLU and NLG experiments. Further, the authors compare PiSSA only to standard LoRA, but there have been several improvements to LoRA, such as AdaLoRA [3] or DoRA [4]. A comparison to those extensions would strengthen the paper. Also, the authors may investigate whether their initialization could be applied to these extensions to further improve performance. Finally, some related work uses a similar initialization by considering principal components [5]. 
[1] Hu et al., LoRA: Low-Rank Adaptation of Large Language Models, ICLR 2022 [2] He et al., Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, ICCV 2015 [3] Zhang et al., AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning, ICLR 2023 [4] Liu et al., DoRA: Weight-Decomposed Low-Rank Adaptation, ICML 2024 [5] Sheng et al., S-LoRA: Serving Thousands of Concurrent LoRA Adapters, MLSys 2024 Technical Quality: 3 Clarity: 3 Questions for Authors: - In line 174 the authors mention that $\alpha=r$, why was this choice made? In [1] $\alpha$ is set to a constant. Setting $\alpha=r$ ultimately results in different learning rates for different r. - The advantage of quantized PiSSA over QLoRA might come from outlier removal; were any experiments done comparing QPiSSA to efficient quantization techniques such as GPTQ [2] or llm.int8 [3]? - In line 188 the authors mention that training for NLG experiments was only done for 1 epoch. It would be interesting to see if LoRA ultimately converges to a similar optimum after longer training, or whether the improved initialization also leads to a better optimum. - What do the entries in the legend of Figure 2b mean exactly? [1] Hu et al., LoRA: Low-Rank Adaptation of Large Language Models, ICLR 2022 [2] Frantar et al., GPTQ: Accurate Post-Training Quantization for Generative Pre-Trained Transformers, ICLR 2023 [3] Dettmers et al., LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale, NeurIPS 2022 Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The fact that PiSSA needs to store 2r ranks compared to r ranks for LoRA, which is mentioned in Appendix C, should also be mentioned under Limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful and constructive review. To answer your questions thoroughly, we have conducted numerous new experiments. We hope these efforts will address your concerns. **Q1: Need to provide error bars in the results** **A1**: The original paper includes extensive experiments covering 12 models, 13 tasks, 1k to 9k training steps, and 1 to 128 ranks. It compares PiSSA and QPiSSA with LoRA, QLoRA, LoftQ, and full fine-tuning. Running all these experiments multiple times is very time- and resource-intensive. Despite this, PiSSA consistently demonstrates significant convergence speed and performance advantages, suggesting the improvement is statistically significant. The improvements of PiSSA over LoRA are notably highlighted in **Table 1** and **Table 2 of General Response**. The error range is very small compared to the improvement. **Q2: Comparison to Gaussian or Kaiming initialization** **A2**: **Table 1** and **Table 2** show that PiSSA outperforms both Gaussian and Kaiming initialization schemes, as they still use a zero-random initialization for the adapter without touching the frozen $W$, while PiSSA for the first time moves the principal components out of $W$ to initialize the adapter for directly fine-tuning the core model. **Q3: Comparison to AdaLoRA or DoRA** **A3**: Please refer to the response to **Q1 of Reviewer x9Pi** and **Q2 of Reviewer Bb8M**. PiSSA outperforms both methods. **Q4: Combining AdaLoRA or DoRA with PiSSA** **A4**: Since PiSSA's structure is identical to LoRA's, it can be combined with other advanced LoRA variants. The specific new challenges introduced by combining PiSSA with different methods are, however, beyond the scope of this work. Nevertheless, we have evaluated the combination of PiSSA with AdaLoRA. Details can be found in **Q2 of Reviewer Bb8M**. PiSSA+AdaLoRA shows superior performance to the other methods in **Table 1**, including PiSSA alone. 
Due to time and resource constraints, we could not finish experiments combining DoRA and PiSSA. We are actively conducting these experiments and will post the results in a follow-up comment. **Q5: Comparing QPiSSA to GPTQ or int8** **A5**: QPiSSA is not a quantization technique, but applies NF4 quantization to PiSSA’s frozen weights; it can also be combined with GPTQ or int8. We believe PiSSA reduces quantization error for the following reasons: 1. Reducing outlier values; 2. Making the value distribution closer to a Gaussian distribution; 3. Preserving larger values in full precision, which narrows the weight distribution of the quantized part. llm.int8 also targets the first point, and PiSSA can further improve on it. The second point is friendly to NF4. The third point is also crucial, as PiSSA uses the adapter to preserve a large portion of weights in full precision. When compared to int8, QPiSSA+int8 reduces quantization error by **23.58%** on LLaMA-3-8B. In **Table 1**, rows 4 and 8 compare QLoRA and QPiSSA using int8, showing that QPiSSA still significantly outperforms QLoRA. We will post the results of combining PiSSA and GPTQ in a follow-up comment. **Q6: LoRA gradients point in random directions** **A6**: Please refer to the response to **Q1 of Reviewer XzvL**. **Q7: S-LoRA uses principal components initialization** **A7**: Although the term "principal components" appears in the third section of the S-LoRA paper, it is not a method focusing on adapter initialization or PEFT. S-LoRA is actually a system designed for the scalable serving of up to 2,000 LoRA adapters. In PEFT, terms like low-rank decomposition and SVD often appear. They mostly refer to using the product of low-dimensional matrices to approximate an ideal $\Delta W$ through learning. These methods **do not actually perform decomposition** on any matrix. 
To our knowledge, PiSSA is the first study that directly performs SVD on the original model and extracts the principal singular values and vectors for fine-tuning. **Q8: Why is alpha set to r?** **A8**: Since PiSSA extracts the model's principal singular values and singular vectors for fine-tuning, the scaling factor ($\alpha/r$) in the adapter needs to be set to 1 to ensure the adapter plus the frozen weights equals the pre-trained weights initially. Nevertheless, it is also possible to set $\alpha$ to other values as in LoRA. In NLU tasks, we have experimented with changing this hyper-parameter to other values. **Q9: Can LoRA ultimately converge to a similar optimum after longer training?** **A9**: In Sections **5.2** and **5.3** of the original paper, we conducted experiments with larger datasets and longer training steps. The results show that PiSSA/QPiSSA indeed achieve lower loss than LoRA/QLoRA, indicating convergence to a better optimum. **Q10: Meaning of ‘iter’ in Figure 2b** **A10**: The meaning of 'iter' is explained in Appendix E: simply put, it means iteratively executing PiSSA initialization multiple times, compressing the quantization error into the adapter each time to further reduce the quantization errors. **Q11: Converting to LoRA needs 2r ranks** **A11**: There are three ways of using PiSSA. The first way is to save the adapter with rank $r$ and replace the original model with the residual model. However, sometimes people want to keep the original model unchanged. Thus the second way is to save only the adapter with rank $r$ without saving the residual model, and recover the residual model when needed by performing SVD again. Since fast singular value decomposition can complete PiSSA initialization within a few seconds to tens of seconds, the second way is also feasible in practice. 
The third way, which is what is discussed in Appendix C, saves a rank-$2r$ adapter to avoid SVD (trading off space for time) and converts a PiSSA adapter equivalently to a LoRA adapter during inference. Therefore, we believe the ability of PiSSA to convert equivalently to a LoRA adapter of twice the rank is not a limitation but an advantage, providing more flexibility for using PiSSA. --- Rebuttal 2: Title: Completing the Remaining Experiments Comment: **Dear Reviewer M1T9**, In your questions Q4 and Q5, you suggested integrating PiSSA with both AdaLoRA and DoRA, as well as combining PiSSA with llm.int8 and GPTQ. During the rebuttal period, due to time constraints, we were only able to combine PiSSA with AdaLoRA and llm.int8, and the experimental results have shown the advantages of PiSSA when integrated. Over the past few days, we have completed the experiments combining PiSSA with DoRA and GPTQ, and the results are presented in the table below:

| Method | Run 1 | Run 2 | Run 3 | GSM8K Average |
| --- | --- | --- | --- | --- |
| Full FT | 74.89 | 74.22 | 74.07 | 74.39±0.356 |
| LoRA(gaussian-init) | 71.11 | 71.19 | 70.74 | 71.01±0.199 |
| LoRA(kaiming init) | 72.25 | 71.57 | 71.95 | 71.92±0.279 |
| **PiSSA** | **76.72** | **76.72** | **76.80** | **76.75±0.036** |
| AdaLoRA | 72.48 | 72.42 | 72.02 | 72.31±0.202 |
| **PiSSA+AdaLoRA** | **78.77** | **78.32** | **78.69** | **78.59±0.199** |
| DoRA | 72.18 | 72.33 | 72.63 | 72.38±0.189 |
| **PiSSA+DoRA** | **77.86** | **77.41** | **77.26** | **77.51±0.257** |
| QLoRA+int8 | 71.78 | 71.48 | 71.78 | 71.68±0.143 |
| **QPiSSA+int8** | **76.18** | **76.48** | **76.95** | **76.54±0.318** |
| QLoRA+GPTQ-4bit | 70.51 | 70.43 | 69.60 | 70.18±0.419 |
| **QPiSSA+GPTQ-4bit** | **74.60** | **74.30** | **74.83** | **74.58±0.217** |

**Q4: Combining AdaLoRA or DoRA with PiSSA** **A4 part 2:** From lines 4, 7, and 8 of the table, it is evident that the performance of PiSSA combined with DoRA significantly surpasses that of DoRA alone and 
also exceeds the performance of PiSSA alone. Taking the combination experiments of PiSSA with AdaLoRA into account as well, it can be inferred that PiSSA benefits from the enhancement techniques of LoRA, demonstrating the potential of PiSSA when integrated with other methods. **Q5: Comparing QPiSSA to GPTQ or int8** **A5 part 2**: GPTQ handles each row $w$ independently, quantizing one weight at a time while updating all not-yet-quantized weights. Therefore, the method of using the nuclear norm to calculate quantization error, as used in our paper, is not applicable to GPTQ. Thus, we instead use perplexity on WikiText-2 to measure quantization error, where a lower perplexity indicates reduced quantization error. The perplexity of the LLaMA-3-8B used in our experiments is **5.14**. After quantization with GPTQ-4bit, the perplexity increased to **20.79**. However, using PiSSA, we were able to reduce the perplexity to **6.23**. These results validate the effectiveness of PiSSA in reducing quantization error as discussed in our paper. From lines 11-12 of the table, it is evident that the performance of QPiSSA combined with GPTQ-4bit significantly surpasses that of QLoRA combined with GPTQ-4bit. This confirms the advantages mentioned in our paper, where QPiSSA retains the rapid convergence and superior performance characteristics of PiSSA. We believe we have addressed all of your questions comprehensively. If you have any additional queries or require further clarification, please do not hesitate to contact us. We are committed to providing any necessary information. If you find the contributions of our paper and the supplementary experiments we have provided to be valuable, might we kindly ask you to consider adjusting your score accordingly? 
I would like to highlight that claiming statistical significance based on the outcome of different experiments without verification is not convincing. Verifying statistical significance should be done for each experiment by providing variance estimates. Having said that, I greatly appreciate the reported error bars for GSM8K. I strongly recommend that the authors report variance estimates for the remaining experiments as well and verify the statistical significance of their results. Further, the additional findings for applying PiSSA to other PEFT methods such as DoRA and AdaLoRA look very promising. Therefore I have decided to increase my score. --- Reply to Comment 2.1.1: Title: Thank you for your suggestions and recognition Comment: In the camera-ready version, we will repeat the key experiments multiple times, include error bars, and verify statistical significance. Your professional insights have been invaluable in improving the quality of the PiSSA paper. We will also incorporate the additional experiments conducted during the rebuttal period into the final version of the paper. Thank you again for your suggestions regarding PiSSA.
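A8 above notes that PiSSA's adapter uses the LoRA scaling factor $\alpha/r$, which must equal 1 (i.e., $\alpha = r$) so that the frozen residual plus the scaled adapter reproduces the pre-trained weight at initialization. A minimal NumPy sketch of this bookkeeping (illustrative names, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))     # stand-in for a pre-trained weight
A = rng.standard_normal((8, 2))     # adapter factor
B = rng.standard_normal((2, 8))     # adapter factor
r, alpha = 2, 2                     # alpha = r  =>  scale = alpha / r = 1
W_res = W - (alpha / r) * (A @ B)   # residual absorbs the scaled adapter

def effective_weight(W_res, A, B, alpha, r):
    # LoRA-style effective weight: frozen part plus scaled adapter product.
    return W_res + (alpha / r) * (A @ B)

# With alpha = r the pre-trained weight is recovered exactly at init.
assert np.allclose(effective_weight(W_res, A, B, alpha, r), W)
```

With any $\alpha \ne r$ the scale differs from 1 and the initial model would no longer match the pre-trained weights, which is the constraint A8 describes.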
Summary: The authors present PiSSA, a relatively simple change to the LoRA framework leading to a large amount of demonstrated benefits. PiSSA proceeds by adjusting the initialization step of standard LoRA; rather than freezing the original weight matrix W and learning a low-rank perturbation W' = W + BA, PiSSA instead considers the SVD $W = USV^T = U[:, r:]\,S[r:, r:]\,V[:, r:]^T + U[:, :r]\,S[:r, :r]\,V[:, :r]^T = W^{res} + AB$, s.t. $A = U[:, :r]\,S[:r, :r]^{1/2}$, $B = S[:r, :r]^{1/2}\,V[:, :r]^T$. Due to this backwards compatibility, PiSSA enjoys the fine-tuning speed/memory savings of LoRA while also being easily implementable in existing PEFT packages. Most importantly, the authors demonstrate across a large number of models, fine-tuning recipes, and benchmarks that PiSSA greatly improves on downstream accuracy compared to LoRA. Furthermore, the authors show that this initialization + fine-tuning strategy also leads to significant decreases in quantization error, outperforming both QLoRA and the recently proposed LoftQ both in terms of training loss (e.g., faster loss reduction) and performance for GSM8K and MATH tasks. Strengths: **Originality** - the specific algorithm is an original, lightweight alteration to the LoRA framework. In particular, the fact that there are no additional hyperparameters and, after the initialization phase, PiSSA is essentially swappable with LoRA implementations are all significant benefits of the method. Furthermore, PiSSA's use of only fine-tuning the principal components lends itself to strong intuition as to why the method leads to such strong downstream fine-tuning performance, and why LoRA is less successful in comparison. The latter point is also true when considering quantization. **Quality** - the extensive experiments across models and fine-tuning evaluations well convey the benefits of this method over standard LoRA. This is also true of the extensive quantization experiments. 
**Clarity** - for the most part, the paper is well written. However, there are specific points where the paper could improve clarity (see weaknesses below). **Significance** - the performance benefits and backwards compatibility lead to significant opportunity for future impact of this work. Weaknesses: # Clarity There are several areas where the paper could improve in clarity. In particular, the literature review requires more granularity, e.g., "AdaLoRA [42, 41, 43]," only one of these works is AdaLoRA. Furthermore, there is overlap between AdaLoRA and PiSSA in both their reliance on the SVD. It is necessary to discuss exactly how AdaLoRA leverages SVDs, and how this differs from PiSSA. There are several experimental details missing, e.g., are the GSM8K evals 8-shot? What are the shots used for the other benchmarks? Were the benchmarks run using the Eleuther eval harness? Please include these details in the paper. Paper at times states it is similar to LoRA and, at other times, seeks to differentiate itself: - "Unlike LoRA and its successors, which focus on learning low-rank approximations of weight updates, our PiSSA approach directly tunes the essential but low-rank parts of the model while keeping the noisier, high-rank, and nonessential parts frozen. Although our approach differs in philosophy from LoRA, it shares most of LoRA’s structural benefits and can be extended by these methods to enhance its performance" - "Since PiSSA shares the identical architecture with LoRA, it inherits most of LoRA’s benefits." This was confusing on an initial pass of the paper. A short, succinct list enumerating the similarities and differences early on may help readers understand exactly how PiSSA distinguishes itself from LoRA, yet how it is able to inherit all of LoRA's PEFT benefits. 
# Quality The authors consider the weight matrix of a single attention layer in Llama-2 in Figure 2: "Since the residual model has removed the large-singular-value components, Wres has a narrower distribution than that of W, as can be seen in Figures 3a and 3b (comparing the singular value distributions of W and Wres), as well as Figures 3c and 3f (comparing the value distributions of W and Wres), which is beneficial for reducing the quantization error" <- However, since this only looks at a single layer of a single model, it is hard to accept this as evidence for a general claim. Can you include a comparison of the distributions of mu/sigma for W and W^{res} (e.g., mean absolute differences between means and ratio between sigmas) across all layers for Llama2 and another model (e.g., Mistral) to show that, in general, W^{res} has a narrower distribution than W? Technical Quality: 4 Clarity: 3 Questions for Authors: See weaknesses. Minor comment: "loftQ" in Table 3 <- "LoftQ" Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The authors have addressed potential limitations of the demonstrated method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
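The decomposition described in the summary above ($W = W^{res} + AB$, with $A$ and $B$ built from the top-$r$ singular values and vectors) can be sketched in a few lines of NumPy. `pissa_init` is an illustrative helper, not the authors' implementation:

```python
import numpy as np

def pissa_init(W, r):
    """Sketch of the described initialization: split W by SVD into a rank-r
    adapter A @ B holding the principal components and a frozen residual."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * np.sqrt(S[:r])            # A = U[:, :r] S[:r, :r]^{1/2}
    B = np.sqrt(S[:r])[:, None] * Vt[:r]     # B = S[:r, :r]^{1/2} V[:, :r]^T
    W_res = U[:, r:] @ np.diag(S[r:]) @ Vt[r:]
    return A, B, W_res

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 12))
A, B, W_res = pissa_init(W, r=4)
assert np.allclose(A @ B + W_res, W)   # W_res + AB reproduces W at init
```

This also makes the backwards compatibility concrete: after initialization, `A`, `B`, and the frozen `W_res` have exactly the shapes a standard LoRA implementation expects.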
Rebuttal 1: Rebuttal: Thank you for recognizing the originality, quality, and significance of our article. We also appreciate your suggestions for improving the writing. As we cannot modify the original text during the rebuttal period, we will incorporate your recommendations in the camera-ready version. **Q1: In the related work section, AdaLoRA [42, 41, 43] is mentioned, but only one of these works is AdaLoRA.** A1: Thank you for pointing this out. Due to the extensive content of PiSSA, we had to condense the text multiple times, leading to this issue. All three of these papers dynamically adjust the rank of LoRA at each layer, so we used AdaLoRA as a collective term. We found similar issues in the subsequent DeltaLoRA [44, 45] citations. We will complete the citation information. **Q2: There is overlap between AdaLoRA and PiSSA in their reliance on the SVD. Discuss exactly how AdaLoRA differs from PiSSA.** A2: AdaLoRA introduces three improvements over LoRA: 1. Trainable parameters in AdaLoRA are changed to $A, B$, and $E$. $A$ and $B$ are Gaussian-initialized, and $E$ is a zero-initialized $r$-dimensional vector, making $A\,\mathrm{diag}(E)\,B = \Delta W$, similar to a singular value decomposition. 2. A regularization loss $\|AA^T-I\|+\|B^TB-I\|$ is used to make $A$ and $B$ gradually orthogonal during training, resembling the SVD of $\Delta W$. 3. An initial large rank is set, and less important $E$ values are gradually masked during training, resulting in different final ranks for each layer, achieving better performance with the same number of parameters. Despite the extensive use of SVD terminology, AdaLoRA **does not perform an actual SVD on any matrix**. In the PEFT domain, terms like low-rank decomposition and singular value decomposition often appear. They generally refer to products of low-dimensional matrices approximating an ideal $\Delta W$ without actual matrix decomposition. 
To our knowledge, PiSSA is the first to perform SVD on the original model, fine-tuning the principal component while keeping the residual model frozen. During the rebuttal, we have evaluated PiSSA against AdaLoRA on GSM8K and GLUE tasks. The results in **Table 1 and Table 2 of General Response** demonstrate that PiSSA outperforms AdaLoRA. Additionally, PiSSA and AdaLoRA represent different improvements to LoRA, making them combinable. Therefore, we additionally improved PiSSA based on AdaLoRA's three innovations: 1. After extracting the principal singular values and vectors of $W$, we use $S$ as an independent trainable vector instead of multiplying it into $U$ and $V$. 2. Since PiSSA's $U$ and $V$ are orthogonal at the beginning, maintaining their orthogonality through orthogonal regularization is very easy. 3. Although AdaLoRA claims to dynamically reduce the number of trainable parameters, the initially large number of parameters is not truly pruned, resulting in more parameters being updated during actual training. Therefore, we did not use this improvement. PiSSA, with its intrinsic principal singular values and orthogonal singular vectors, is very suitable for combination with AdaLoRA. According to **Table 1 of General Response**, the performance of the improved PiSSA surpasses all other methods, including PiSSA. **Q3: Are the GSM8K evaluations 8-shot?** A3: This article uses the zero-shot evaluation prompt for GSM8K and MATH following MetaMath [1], the zero-shot evaluation prompt for HumanEval and MBPP following WizardCoder [2], and the zero-shot single answer grading prompt for MT-Bench [3]. We will provide detailed settings and corresponding code in the supplementary materials. [1] Yu, Longhui, et al. "MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models." ICLR24. [2] Luo, Ziyang, et al. "WizardCoder: Empowering Code Large Language Models with Evol-Instruct." ICLR24. [3] Zheng, Lianmin, et al. 
"Judging llm-as-a-judge with mt-bench and chatbot arena." NeurIPS 2024. **Q4: A short, succinct list enumerating the similarities and differences early on may help readers understand.** A4: Thank you for the suggestion. PiSSA has the same structure as LoRA, making it easy for users to switch from LoRA to PiSSA for fine-tuning. However, PiSSA's different initialization leads to distinct optimization directions, faster convergence, better training results, and lower quantization errors. We will list the similarities and differences between PiSSA and LoRA in the introduction section of the camera-ready version for easier understanding. **Q5: Only looks at a single layer of a single model is hard to accept this as evidence for $W_{res}$ having a narrower distribution than that of W.** A5: To intuitively compare the distribution differences between quantized original and residual models, we took LLaMA 2-7B's first Query layer as an example to illustrate the distribution of $W$ and $W_{res}$. As you suggested, using only one layer of one model is not statistically significant. In **Table 5 of General Response**, we applied PiSSA initialization to LLaMA-2-7B, Mistral-7B, Gemma-7B, and LLaMA-3-8B, and fit the values in every linear layer with Gaussian distribution and calculated their mu and sigma in the table. The results show that the residual models' means are closer to 0, and the standard deviations are smaller after PiSSA initialization. Thus, $W^{res}$ indeed has a narrower distribution than W in a statistical sense. Nevertheless, the difference is not as large as that in the first layer after averaging all layers, which we suspect is because middle layers in a model tend to have more even eigenvalue distributions due to redundancy and insufficient training. We will include this statistical result in the supplementary materials and reflect it in the main text. 
**Q6: Minor comment: "loftQ" in Table 3 <- "LoftQ".** A6: Thank you once again for your careful reading and helpful suggestion. Your feedback is greatly appreciated, and we will incorporate all your suggestions in the camera-ready version. --- Rebuttal Comment 1.1: Title: Acknowledgement of rebuttal Comment: I thank the authors for their response to my original questions. I have read the other reviews and respective authors responses, and have no further questions or concerns. --- Reply to Comment 1.1.1: Title: To Reviewer Bb8M Comment: We sincerely appreciate the professionalism and sense of responsibility you demonstrated throughout the rebuttal period. Your valuable suggestions have been instrumental, and we will incorporate them into the camera-ready version. We believe your recommendations will greatly enhance PiSSA's contribution to the field of efficient fine-tuning of LLMs.
Summary: In this paper, the authors proposed a modified low-rank adaptation (LoRA) method for fine-tuning large pretrained models. Specifically, the proposed method initializes the A and B matrices from the leading singular values and vectors and freezes the weight matrix to be the residual. Further, the authors provide a quantization step to reduce the memory consumption. The authors demonstrate the effectiveness of this proposed method on a lot of models and tasks, showing that this method converges faster than the conventional LoRA method. Strengths: - Overall, this paper is well-written. The authors did a great job of presenting the new idea with details. The contents are organized well. - The authors comprehensively evaluated the proposed method, both quantitatively and qualitatively. - The proposed method is simple, yet effective. Weaknesses: - It would be great if the authors could provide more insight into the proposed method. For example, is it possible to compare the subspace of the gradient of conventional LoRA? - The proposed method utilizes the first r eigenvectors. How about other dimensions in the subspace? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the comments in the "limitation" block. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: There is no negative societal impact as far as I can tell. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition of PiSSA as a simple yet effective method, and for your appreciation of the article's quantitative and qualitative analysis. Here are responses to the concerns raised: **Q1: Provide more insight, for example, comparing the subspace of the gradient of conventional LoRA?** **A1**: PiSSA potentially provides many insights for the PEFT area: 1. To reduce fine-tuning parameters, LoRA and many of its variants add a low-rank adapter $\Delta W = AB$ to $W$, and initialize $\Delta W$ with zero to ensure $W + AB$ initially does not change the model. However, PiSSA **for the first time** touches the frozen $W$ by decomposing $W$ into principal and residual components with SVD, which are used to initialize $\Delta W$ and $W^{res}$ respectively. This idea has already inspired many follow-up works studying how best to allocate the model between $\Delta W$ and $W^{res}$, which contributes greatly to the community. 2. By putting the principal components of $W$ into the adapter $AB$, PiSSA can directly fine-tune the core functions of the model instead of fine-tuning the random/zero-initialized $AB$ as in LoRA. This leads to faster convergence and higher performance. From the gradient direction perspective, we have shown in lines 50-55 and lines 123-131 of our paper that the **initial gradient for LoRA is zero/random**, thus LoRA could **waste much time around the initial point**, while PiSSA optimizes along the gradient direction of the model's principal components, thereby approximating the effect of full parameter fine-tuning. The toy experiment in Figure 2a using the MNIST dataset and an MLP with 2 linear layers visually demonstrates that PiSSA and full fine-tuning **converge much faster** compared to LoRA. In the supplementary materials, the loss_landscape.gif shows this process dynamically. 
In LLM experiments, the advantage of PiSSA over LoRA and the similarities between PiSSA and full fine-tuning can also be observed from the loss and grad norm during the training process in Figures 4a and 4b. 3. Compared to LoRA + quantization, PiSSA + quantization preserves the principal singular values and singular vectors in full precision while quantizing the residual part $W^{res}$ instead of the full $W$. Since the large singular values have been removed from $W^{res}$, its value distribution is much narrower than that of $W$, which makes quantization easier (the larger the variance of the value distribution, the larger the error quantization introduces). This can be seen in Figures 3 and 5, where QPiSSA greatly **reduces the quantization error** and shows better performance than QLoRA. Regarding your question about comparing the subspaces of the gradients of PiSSA and LoRA, we further validated our analysis through two additional experiments. 1. We trained LLaMA-3-8B on the MetaMath dataset 5 times, each time initializing LoRA with a different random seed while using the same batch of 128 training examples to calculate LoRA's gradients. After reducing the gradients to two dimensions, we list the results in **Table 3 of General Response**. We can observe that the gradient of matrix $A$ is consistently zero, and the gradient direction of $B$ varies with each initialization. This occurs because the gradient of matrix $A$ depends on matrix $B$, which is initialized to zero in LoRA, resulting in a zero gradient for matrix $A$. Conversely, the gradient of matrix $B$ is influenced by matrix $A$, which is initialized from a Gaussian, hence the varying gradient directions of matrix $B$ in each experiment. In contrast, the gradient direction in PiSSA remains consistent across all five training seeds and depends only on the original model and the training data. This experiment underscores the stability of PiSSA's optimization direction relative to LoRA. 2. 
Furthermore, we quantitatively compared the effects of updating along the principal singular value direction vs. the “random” direction in the early stages of fine-tuning. We trained LLaMA-3-8B on the MetaMathQA dataset using PiSSA and LoRA, saving the parameters and gradients for the first 50 iterations. At the 50th step, the losses for LoRA and PiSSA dropped to 0.3677 and 0.2899, respectively. We took the parameters at the 50th step as the target point. Using the direction and distance from the initial parameters to the final step's parameters as a reference, we compared the first 5 steps for LoRA and PiSSA. For each step, we determined how far the parameters moved in that direction, then divided by the target distance to obtain a ratio. The ratios are recorded in **Table 4 of General Response**. As shown in the first 2 rows of the table, after just 5 updates, PiSSA's loss reduced from 0.8884 to 0.3346, whereas LoRA's only reduced to 0.5538, reflecting the advantage of the principal singular value direction over the zero-random direction in convergence speed. In rows 3-4 of the table, matrix $A$ in LoRA had a gradient of 0 at the first step and thus did not update. In the following 4 steps, it only moved 15.94% towards the target direction. In rows 5-6 of the table, matrix $B$ in LoRA always moved less towards the endpoint in the target direction compared to PiSSA. **Q2: Besides the first r eigenvectors, how about other dimensions in the subspace?** A2: Our PiSSA puts the first $r$ singular values and vectors in the adapter $A,B$ and the remaining ones in the frozen $W^{res}$. In Figure 8 of the Appendix, we compared the training outcomes of using the principal, middle, and minor $r$ singular values and singular vectors to initialize $A,B$. The experimental results show that training with the principal singular values results in lower loss and higher accuracy. 
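The gradient argument in A1 above is easy to verify on a toy linear layer: for $y = x(W + AB)$, the gradient $\partial L/\partial A = x^\top (\partial L/\partial y) B^\top$ vanishes when $B$ is zero-initialized, while $\partial L/\partial B = A^\top x^\top (\partial L/\partial y)$ points in a direction set by the randomly drawn $A$. A hand-derived NumPy sketch (illustrative, not the authors' experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 2
x = rng.standard_normal((4, n))      # a batch of inputs
W = rng.standard_normal((n, n))      # frozen pre-trained weight
T = rng.standard_normal((4, n))      # regression target
A = rng.standard_normal((n, r))      # Gaussian-initialized adapter factor
B = np.zeros((r, n))                 # zero-initialized adapter factor

# L = 0.5 * ||x @ (W + A @ B) - T||^2, so the upstream gradient is
# G = dL/dy = x @ (W + A @ B) - T, which is nonzero at initialization.
G = x @ (W + A @ B) - T
grad_A = x.T @ G @ B.T   # proportional to B -> zero at the first step
grad_B = A.T @ x.T @ G   # direction determined by the random draw of A

assert np.allclose(grad_A, 0.0)      # A receives no gradient at step one
assert not np.allclose(grad_B, 0.0)  # B's gradient is nonzero but random
```

Rerunning this with a different seed changes the direction of `grad_B` but keeps `grad_A` at exactly zero, mirroring the five-seed observation reported in **Table 3 of General Response**.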
--- Rebuttal 2: Title: Official Comment by Authors Comment: **Dear Reviewer XzvL**, The discussion period has passed the **halfway** point, and we have addressed the questions you previously raised. Do you have any additional questions or concerns that you would like to discuss? Our paper investigates the **gradient behavior of LoRA**, revealing that $A$ initially has a **zero gradient** while $B$'s gradient points in a **random** direction. This leads to **slow convergence** and potentially **suboptimal local minima**. To address these issues, we propose the PiSSA initialization method. **PiSSA approximates the optimization direction of full fine-tuning** by fine-tuning the principal components of a model. To our knowledge, PiSSA is the **first** method to use SVD on the original model, employing the principal singular values and vectors to initialize the adapter and fine-tune it, while fixing the remaining parts of the model during training. Our experiments demonstrate that PiSSA not only **converges faster** but also achieves **better final results** compared to LoRA. The initialization process is **efficient**, taking only seconds to minutes due to the use of fast singular value decomposition. We extend PiSSA by integrating it with **NF4**, **llm.int8**, and **GPTQ** quantization to create QPiSSA. This approach significantly **reduces quantization error** compared to QLoRA while retaining the **fast convergence** and **strong performance** of PiSSA. PiSSA modifies only the initialization method used by LoRA, making it compatible with various LoRA enhancements. We **combined PiSSA** with **LoftQ**, **AdaLoRA**, and **DoRA**, and our results show that the combined variants outperform both these methods and PiSSA alone, demonstrating its potential for further improvements. Our extensive experiments include **5 NLG** and **8 NLU** tasks using **12** different models ranging from **184M to 70B** parameters. 
We compared performance across **1k-9k+** training steps and ranks from **1 to 128**, evaluating against methods such as **LoRA**, **QLoRA**, **LoftQ**, **DoRA**, **AdaLoRA**, and **full-parameter fine-tuning**. We also examined the effects of initializing with **principal**, **medium**, and **minor** singular values and vectors. If you find the content in the main paper and our rebuttal satisfactory, would you consider revising your score? --- Rebuttal Comment 2.1: Title: Looking forward to your response Comment: Dear Reviewer XzvL, We have addressed the concerns you raised in your initial review and have submitted a detailed rebuttal. We are writing to confirm whether you have had the opportunity to review our responses. We hope that our rebuttal has addressed your questions satisfactorily and would appreciate it if you could reconsider the contributions of our manuscript, PiSSA, along with the efforts we have made during the rebuttal process. We are eager to hear your feedback and are open to any further discussion that might help clarify any remaining issues. Looking forward to your response. Best regards, Submission 5120 Authors
Summary: The paper presents a method for creating LoRA-like fine-tuning adapters for base models. The idea is to initialize the W, A, and B matrices in LoRA from an SVD decomposition. W consists of the less important ranks, while A and B consist of the most important ranks. This implies that the less important principal components of the model remain frozen while the most important ones get updated. Another benefit of this scheme is that A and B can be in high precision while W is in low precision, thus ensuring a reduced quantization error. Strengths: I think this is a good paper with good results and a potential for significant impact as adapter tuning becomes more mainstream. The paper is written well, the method is intuitive, and the experimental results are thorough. Weaknesses: As the authors note in the limitations section there are many more experiments they could have conducted, but those are future scope. I am happy with the current scope and experiments. Having said that, I wish the authors had included at least a few experiments comparing advanced LoRA adapter methods. DoRA is mentioned in related works, but I couldn't find any comparison with it. Technical Quality: 4 Clarity: 4 Questions for Authors: Will the authors be releasing their code? I believe it is very important that they do as our community could greatly benefit from being able to reproduce the experiments here and build on top of it. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The limitations section in the paper is good enough for me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging the originality of PiSSA, the quality of our experiments, and our contributions to the community. Here are the answers to your queries: **Q1: Comparison with advanced LoRA adapter methods, e.g., DoRA** **A1**: In our paper, we demonstrated PiSSA's effectiveness through a large number of experiments on 5 NLG and 8 NLU tasks, utilizing models ranging from 184M to 70B parameters. Our comparisons included 1k-9k+ training steps and ranks ranging from 1 to 128, against methods such as LoRA, QLoRA, LoftQ, and full-parameter fine-tuning. PiSSA modifies only the initialization method used by LoRA, making it orthogonal to some LoRA enhancements. Initially, due to time and resource limitations, we did not compare PiSSA with DoRA. However, upon your request, we have now included these comparisons. DoRA adds a learnable magnitude module to LoRA, normalizing $W + AB$ at each update step and multiplying it by the magnitude module. This allows $A, B$ to learn the direction and the magnitude module to learn the magnitude of $\Delta W$. While this approach can improve fine-tuning performance, normalizing $W + AB$ at each step results in slower fine-tuning speeds. In contrast, PiSSA only changes LoRA's initialization method, matching LoRA in training speed and converging faster, thereby reducing training costs. We compared PiSSA with DoRA on LLaMA-3-8B for GSM8K and DeBERTa-v3-base for GLUE tasks, repeating each experiment three times and recording the results in **Table 1** and **Table 2**. From these tables, PiSSA significantly outperforms DoRA, as DoRA still uses zero and random initialization. **Q2: Open source schedule** **A2**: PiSSA has greatly benefited from the open-source community, and we are excited to contribute back. PiSSA only modifies the initialization method of LoRA, so it incurs no additional training costs, requires no extra environmental setup, and involves no changes to existing code. 
This allows for straightforward training and inference, just like LoRA, given the initialized parameters. We plan to release all PiSSA-initialized models from our paper, along with the initialization of future mainstream models, to facilitate easy adoption. For users interested in customizing PiSSA configuration or modifying its core code, we have already integrated PiSSA into HuggingFace’s PEFT library and provide the training scripts used in our paper to replicate the results. Additionally, we will release training logs and checkpoints from our experiments to aid in further comparisons and validations.
Rebuttal 1: Rebuttal: 1. **Contribution** 1. This paper analyzes the gradient of LoRA, showing that $A$ and $B$ initially have zero gradients and random gradient directions, respectively. This leads to slow convergence and might result in convergence to suboptimal local minima. 2. We propose the PiSSA initialization method, which approximates the optimization direction of full fine-tuning through fine-tuning the principal components of a model. To the best of our knowledge, PiSSA is the first study to perform SVD on the original model and use the principal singular values and singular vectors to initialize the adapter and fine-tune them, while the remaining parts are used to initialize the residual model and are fixed during training. Experiments show that PiSSA converges faster and achieves better final results compared to LoRA. 3. We combine PiSSA with NF4 quantization to propose QPiSSA, which can reduce quantization error by about 20% compared to QLoRA, while maintaining the fast convergence and good performance of PiSSA. 2. **New experimental results** During this rebuttal period, we conducted numerous new experiments to address the reviewers' concerns. We have listed all the experimental results here and referenced them at the respective points of the reviewers' questions. **Table 1**. The comparisons of PiSSA with LoRA using Gaussian & Kaiming initialization, LoRA+llm.int8, DoRA, AdaLoRA, PiSSA+llm.int8, and PiSSA+AdaLoRA for GSM8K on LLaMA-3-8B. Each experiment was repeated three times, and the average values and standard deviations were recorded. 
| Method | Run 1 | Run 2 | Run 3 | GSM8K Average | | --- | --- | --- | --- | --- | | Full FT | 74.89 | 74.22 | 74.07 | 74.39±0.356 | | LoRA(gaussian-init) | 71.11 | 71.19 | 70.74 | 71.01±0.199 | | LoRA(kaiming init) | 72.25 | 71.57 | 71.95 | 71.92±0.279 | | LoRA+llm.int8 | 71.78 | 71.48 | 71.78 | 71.68±0.143 | | DoRA | 72.18 | 72.33 | 72.63 | 72.38±0.189 | | AdaLoRA | 72.48 | 72.42 | 72.02 | 72.31±0.202 | | PiSSA | **76.72** | **76.72** | **76.80** | **76.75±0.036** | | PiSSA+llm.int8 | **76.18** | **76.48** | **76.95** | **76.54±0.318** | | PiSSA+AdaLoRA | **78.77** | **78.32** | **78.69** | **78.59±0.199** | **Table 2**: Fine-tuning DeBERTa-v3-base with PiSSA, LoRA (using Gaussian & Kaiming initialization), DoRA, and AdaLoRA on the GLUE benchmark, including 8 subtasks MNLI, SST-2, CoLA, QQP, QNLI, RTE, MRPC, and STS-B. The results for PiSSA, DoRA, and LoRA (with Kaiming initialization) are averaged over three runs. The other results, taken from the AdaLoRA paper, are averaged over five runs, with only the mean values reported. | Method | Run 1 | Run 2 | Run 3 | GLUE Average | | --- | --- | --- | --- | --- | | Full FT | — | — | — | 88.245 | | LoRA(gaussian-init) | — | — | — | 88.503 | | LoRA(kaiming init) | 88.795| 88.665 | 88.395 | 88.618±0.167 | | DoRA | 89.186 | 88.955 | 88.810 | 88.984±0.155 | | AdaLoRA | — | — | — | 89.464 | | PiSSA | **89.915** | **89.783** | **89.711** | **89.803±0.084** | **Table 3**. The gradient direction of the first step for PiSSA and LoRA initialized with five different random seeds, using the same batch of 128 data points, was reduced to two dimensions. LoRA's gradient direction is zero and random, while PiSSA's direction is consistent, related to the principal singular value of the model. 
| | Method | Seed 0 | Seed 1 | Seed 2 | Seed 3 | Seed 4 | | --- | --- | --- | --- | --- | --- | --- | | grad_A | LoRA | [0,0] | [0,0] | [0,0] | [0,0] | [0,0] | | | PiSSA | **[0,1]** | **[0,1]** | **[0,1]** | **[0,1]** | **[0,1]** | | grad_B | LoRA | [-0.992, 0.122] | [ 0.9525, 0.304] | [ 0.4587, -0.888] | [ 0.241, 0.970] | [ 0.036, -0.999] | | | PiSSA | **[1,0]** | **[1,0]** | **[1,0]** | **[1,0]** | **[1,0]** | **Table 4**. The ratio of the distance moved towards the target point by PiSSA and LoRA after 5 update steps. | Metric | Method | Step 1 | Step 2 | Step 3 | Step 4 | Step 5 | | --- | --- | --- | --- | --- | --- | --- | | Loss | LoRA | **0.8881** | 0.7943 | 0.6598 | 0.6021 | 0.5538 | | | PiSSA | 0.8884 | **0.6476** | **0.4568** | **0.3657** | **0.3346** | | ratio_to_target_A | LoRA | 0.00% | 2.57% | 6.28% | 10.84% | 15.94% | | | PiSSA | **4.31%** | **11.29%** | **17.30%** | **22.77%** | **27.71%** | | ratio_to_target_B | LoRA | 3.31% | 9.66% | 15.75% | 21.51% | 27.00% | | | PiSSA | **4.29%** | **11.27%** | **17.29%** | **22.77%** | **27.72%** | **Table 5**. Using a normal distribution to fit the original model and the residual model in PiSSA, then recording the mean and standard deviation of the distribution. | | mu | sigma | | --- | --- | --- | | LLaMA-2-7B | 5.4679e-06 | 0.0193 | | PiSSA-LLaMA-2-7B-r128 | **2.6775e-06** | **0.0172** | | Mistral-7B | 9.8105e-07 | 0.0033 | | PiSSA-Mistral-7B-r128 | **5.3738e-07** | **0.0029** | | gemma-7b | 1.3422e-06 | 0.0045 | | PiSSA-gemma-7b-r128 | **6.9983e-07** | **0.0040** | | LLaMA-3-8B | 5.6969e-06 | 0.0139 | | PiSSA-LLaMA-3-8B-r128 | **2.3858e-06** | **0.0118** |
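The Table 3 observation (zero first-step gradient for LoRA's $A$, a random direction for $B$) follows from the chain rule for a single linear layer; a minimal numerical sketch, assuming the common convention $\Delta W = BA$ with $B$ initialized to zero, a squared-error loss, and illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 16, 12, 4
W = 0.02 * rng.standard_normal((m, n))   # frozen base weight
x = rng.standard_normal(n)               # one input example
y = rng.standard_normal(m)               # its target

def lora_grads(A, B):
    g = (W + B @ A) @ x - y        # dL/dz for L = 0.5 * ||z - y||^2
    dBA = np.outer(g, x)           # dL/d(B A)
    return dBA @ A.T, B.T @ dBA    # dL/dB, dL/dA (chain rule through B A)

A = rng.standard_normal((r, n))    # Kaiming-like random init
B = np.zeros((m, r))               # zero init
dB, dA = lora_grads(A, B)
assert np.allclose(dA, 0)          # A receives no gradient at step 1
assert not np.allclose(dB, 0)      # B moves, in a direction set by random A
```

Because $\partial L / \partial A = B^\top \, \partial L / \partial (BA)$ vanishes when $B = 0$, while $\partial L / \partial B = \partial L / \partial (BA) \, A^\top$ inherits the randomness of $A$, matching the zero and random directions reported for LoRA in Table 3.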
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Inevitable Trade-off between Watermark Strength and Speculative Sampling Efficiency for Language Models
Accept (poster)
Summary: The paper investigates whether speculative sampling is compatible with watermarking for LLMs. Strengths: S1. This paper is original in the landscape of LLM watermarking. S2. This paper shows an interesting "no-go" theoretical result (Th. 1). S3. This paper proposes 2 designs sustaining either the sampling efficacy or the watermark strength. S4. I like very much the proposed measurement of the watermark strength: the average exponent of the p-value. S5. The presentation is crystal clear. Weaknesses: W1. Speculative sampling. I am not an expert in sampling for LLM. I do not know how "speculative sampling" is key compared to more common methods like nucleus or top-k sampling, which prevents me from judging this paper's impact. W2. More comments on the experimental results Say at least that MWS is much better than MSE in the sense that the MWS loss of sampling efficiency is barely visible, whereas the MSE loss of watermarking strength is significant. The LLMs used in the experimental protocols are old and their entropy is bigger than more recent ones. It might be worth stating that this choice gives high ANLPPT. Technical Quality: 4 Clarity: 3 Questions for Authors: Q1. I understood that ANLPPT equals $-\log(P_n) / n$ where $P_n$ is the measured P-value over $n$ tokens. Which logarithm? What is the typical value of $n$? My experience with the Aaronson scheme is that the score fluctuates a lot. A median might be a more reliable statistic than a mean over token. Q2. I am very surprised that DeltaGumbel is way better than Gamma. Is this due to Q3. Line 33: "Unbiased watermarking schemes [12] have been developed." Well, Aaronson [1] was the first person to introduce this concept, isn't it? Q4. It is quite curious that no citation or reference is given for "DeltaGumbel" and "Gamma" schemes. DeltaGumbel is known as Aaronson [1] (everywhere in the literature), and Gamma looks like ITS from Kuditipudi [15]. Isn't it? Q5. 
Some details of the experimental protocol are missing. I suspect the measurements are done on an "idealized" setup where the secret key changes at random from one token to another, and the detector knows this. This is not realistic as it is absolutely not robust. To be practical, one has to make the secret key dependent on the previous tokens (see Kirchenbauer [13, 14]). This might hurt speculative sampling since a rejection implies a recomputation of the hash. Moreover, repeated token blocks need to be skipped at the detection side (see Fernandez [8]); otherwise the p-value is incorrect. This hurts the ANLPPT. I don't believe these tweaks modify the general conclusions of this work, but this "idealized" setup should be clearly stated, with the implications (ANLPPT and AATPS are lower in practice). Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: No limitation is given. A limitation about the lack of practicality of the experimental protocol would be welcome. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thrilled to receive such a meticulous and knowledgeable review of our research. It is a privilege to have our paper assessed by a reviewer with such extensive knowledge in the landscape of LLM watermarking. Your acknowledgment of the originality of our work is truly rewarding. In the following sections, we will respond to your inquiries and feedback. > how "speculative sampling" is key compared to more common methods like nucleus or top-k sampling Speculative sampling and nucleus or top-k sampling operate at different design levels and are not inherently conflicting. They can be used simultaneously. For instance, the seminal paper on speculative sampling, "Accelerating Large Language Model Decoding with Speculative Sampling" [5], mentions that "With standard sampling methods such as nucleus, top-k sampling and adjusting temperature, we can modify the probabilities accordingly before applying this rejection sampling scheme." > More comments on the experimental results Say at least that MWS is much better than MSE in the sense that the MWS loss of sampling efficiency is barely visible, whereas the MSE loss of watermarking strength is significant. The LLMs used in the experimental protocols are old and their entropy is bigger than more recent ones. It might be worth stating that this choice gives high ANLPPT. We greatly appreciate your suggestion and are eager to include more discussion of the experimental results using the additional page in the camera ready version. We concur that MWS exhibits a smaller performance loss compared to VSpS and is highly practical. In fact, we strongly recommend using MWS in practice. We will elaborate on the impact of different models on entropy, and how entropy influences watermark strength using the additional page. > Q1. I understood that ANLPPT equals $-\log(P_n) / n$ where $P_n$ is the measured P-value over $n$ tokens. Which logarithm? What is the typical value of $n$? 
My experience with the Aaronson scheme is that the score fluctuates a lot. A median might be a more reliable statistic than a mean over token. The logarithm used here is the natural logarithm. $n$ is the number of tokens in the generated sentences. We generate many sentences with a max_length of 128, but they may be shorter due to early stopping. We can calculate the median, introducing the Median Negative Log P-value Per Token (MNLPPT). Text summarization task with LLaMa-7b model as target model and LLaMa-68m model as reference model: | K | method | reweight | n | ANLPPT(U Score) | MNLPPT(U Score) | ANLPPT(maximin-LLR) | MNLPPT(maximin-LLR) | |---|--------|----------|---|-----------------|-----------------|---------------------|---------------------| | 1 | Basic | No Reweight | 122.0±0.7 | 0.0±0.0 | 0 | 0.0±0.0 | 0 | | 1 | VUW | DeltaGumbel | 121.0±0.8 | **0.376±0.009** | **0.385** | **1.71±0.03** | **1.933** | | 1 | VUW | Gamma | 121.9±0.7 | **0.097±0.002** | **0.098** | **0.272±0.005** | **0.333** | | 1 | VSpS | No Reweight | 122.6±0.7 | 0.0±0.0 | 0 | 0.0±0.0 | 0 | | 1 | MSE | DeltaGumbel | 121.5±0.8 | 0.153±0.004 | 0.141 | 0.640±0.014 | 0.660 | | 1 | MSE | Gamma | 121.9±0.7 | 0.0433±0.0012 | 0.036 | 0.0605±0.0019 | 0.054 | | 1 | MWS | DeltaGumbel | 121.3±0.8 | **0.374±0.009** | **0.380** | **1.71±0.03** | **1.921** | | 1 | MWS | Gamma | 121.9±0.7 | **0.098±0.002** | **0.100** | **0.275±0.005** | **0.335** | | 2 | VSpS | No Reweight | 122.7±0.7 | 0.0±0.0 | 0 | 0.0±0.0 | 0 | | 2 | MSE | DeltaGumbel | 122.5±0.7 | 0.111±0.003 | 0.095 | 0.419±0.010 | 0.403 | | 2 | MSE | Gamma | 122.4±0.7 | 0.0322±0.0010 | 0.024 | 0.0310±0.0014 | 0.021 | | 2 | MWS | DeltaGumbel | 121.4±0.8 | **0.374±0.009** | **0.379** | **1.71±0.03** | **1.913** | | 2 | MWS | Gamma | 122.8±0.7 | **0.096±0.002** | **0.097** | **0.272±0.005** | **0.332** | | 3 | VSpS | No Reweight | 121.4±0.8 | 0.0±0.0 | 0 | 0.0±0.0 | 0 | | 3 | MSE | DeltaGumbel | 121.4±0.8 | 0.094±0.003 | 0.079 | 0.331±0.009 | 0.306 
| | 3 | MSE | Gamma | 121.8±0.7 | 0.0281±0.0009 | 0.020 | 0.0214±0.0012 | 0.011 | | 3 | MWS | DeltaGumbel | 121.2±0.8 | **0.374±0.009** | **0.380** | **1.70±0.03** | **1.919** | | 3 | MWS | Gamma | 122.3±0.7 | **0.097±0.002** | **0.098** | **0.274±0.005** | **0.335** | | 4 | VSpS | No Reweight | 122.5±0.7 | 0.0±0.0 | 0 | 0.0±0.0 | 0 | | 4 | MSE | DeltaGumbel | 122.2±0.7 | 0.083±0.002 | 0.067 | 0.280±0.008 | 0.249 | | 4 | MSE | Gamma | 122.6±0.7 | 0.0258±0.0008 | 0.018 | 0.0167±0.0011 | 0.007 | | 4 | MWS | DeltaGumbel | 121.1±0.8 | **0.375±0.009** | **0.380** | **1.71±0.03** | **1.923** | | 4 | MWS | Gamma | 122.2±0.7 | **0.096±0.002** | **0.097** | **0.271±0.005** | **0.331** | --- Rebuttal 2: Title: Rebuttal by Authors (Continue) Comment: > Q2. I am very surprised that DeltaGumbel is way better than Gamma DeltaGumbel devotes all the entropy to watermarking, resulting in an ANLPPT approximately equal to the language model's entropy. Gamma has a weaker watermarking strength, adding at most 1 bit of watermark per step, so the ANLPPT cannot exceed $\log(2)$ and is significantly smaller than DeltaGumbel. > Q3. Line 33: "Unbiased watermarking schemes [12] have been developed." Well, Aaronson [1] was the first person to introduce this concept, isn't it? We fully acknowledge the seminal and pioneering work of Aaronson[1]. It achieves an unbiased distribution for each token. However, Aaronson[1]'s method is not commonly referred to as an unbiased watermark because it still incurs some performance loss, as evidenced in "Mark My Words: Analyzing and Evaluating Language Model Watermarks". Although the distribution for each token is unbiased, the watermarks at different token positions may correlate when watermarking the entire sequence, leading to performance degradation. This issue is tackled in follow-up work like [12], which ensures an unbiased distribution not only for each token but also for the entire sequence. > Q4. 
It is quite curious that no citation or reference is given for "DeltaGumbel" and "Gamma" schemes. DeltaGumbel is known as Aaronson [1] (everywhere in the literature), and Gamma looks like ITS from Kuditipudi [15]. Isn't it? You are correct regarding Aaronson [1]. However, Gamma differs from Kuditipudi[15] and originates from [12]. Their details have been moved to Section D due to space limitations. We will add the citation and add more explanation in the main paper. > To be practical, one has to make the secret key dependent on the previous tokens (see Kirchenbauer [13, 14]). We have already implemented this in our code. In `lm.py`, the `step_watermark` function handles this logic. > This might hurt speculative sampling since a rejection implies a recomputation of the hash. In our implementation, there is no hash recomputation triggered by rejection, as long as we carefully pass the computed hash results. The `mc_watermark.py` code implements this logic, ensuring that `step_watermark` is called at most once after obtaining the draft tokens, only when all draft tokens are accepted. Moreover, the hash computation cost is relatively low compared to the LLM's computation. > Moreover, repeated token blocks need to be skipped at the detection side (see Fernandez [8]); We have accounted for this in our code. The `detect_pre` function in `lm.py` implements this logic, using the `skipped` variable to determine whether to skip. > Q5. Some details of the experimental protocol are missing. I suspect the measurements are done on an "idealized" setup where the secret key changes at random from one token to another, and the detector knows this. > I don't believe these tweaks modify the general conclusions of this work, but this "idealized" setup should be clearly stated, with the implications (ANLPPT and AATPS are lower in practice). > A limitation about the lack of practicality of the experimental protocol would be welcome. 
As explained above, our experiments are not "idealized," and the calculated ANLPPT and AATPS are not distorted (except for the changes in ANLPPT due to the change in entropy caused by using a different model, as discussed earlier). We will provide additional explanations of the experimental protocol, detailing how we carefully handled these aspects. We sincerely appreciate your thoughtful comments and suggestions, especially the supplementary explanations regarding entropy and the experimental protocol. We are delighted to engage in such a positive academic exchange with the reviewer and eagerly await your feedback on whether our explanations have addressed your concerns. Please feel free to raise any further inquiries; we are happy to answer all questions. We look forward to your valuable feedback. --- Rebuttal Comment 2.1: Comment: I acknowledge that I have read the rebuttal. The authors took great care to answer my questions. I confirm my grade. My comments below are just out of curiosity. > You are correct regarding Aaronson [1]. However, Gamma differs from Kuditipudi[15] and originates from [12]. You are right. I got confused between the Gamma (original) and the Delta (very similar to Kuditipudi) schemes of [12]. > We fully acknowledge the seminal and pioneering work of Aaronson[1]. It achieves an unbiased distribution for each token...This issue is tackled in follow-up work like [12]. I have difficulty understanding the difference. Aaronson [1] and Hu [12] achieve an unbiased distribution for each token, and both of them use hashing of previous tokens to refresh the key. So, I do see why [12] is unbiased, but I do not see why [1] is not. --- Rebuttal 3: Comment: Thank you very much for reading our rebuttal and for your prompt response. We are happy to participate in further discussion on the difference between achieving an unbiased distribution for each token and an unbiased distribution for the entire sequence. 
To illustrate this difference, let's consider a thought experiment. Suppose we have a prompt that says: `Continuously generate uniformly distributed random 01 numbers. Output in a specific format:\nA new random 01 number: 1\nA new random 01 number: 0\nA new random 01 number: 1` Assume we have an LLM that is powerful enough to generate a perfect output distribution that fully reflects the prompt. In this case, the entropy would be 0 for the "\nA new random 01 number: " part and 1 bit for the random 01 variable. We can measure the quality by the absolute difference between the number of 0s and 1s after generating 100 random 01 numbers. Without watermarking, the generated 0s and 1s would be nearly uniform, with only small fluctuations. However, when watermarking is introduced, the situation becomes more interesting. Since the previous tokens are always "\nA new random 01 number: " for each line, the key used in watermarking will be the same every time a random 01 variable is generated. This will result in consistently outputting either 0 or 1, leading to a large difference between the number of 0s and 1s. To address this issue, [12] introduced the principle that each watermarking operation should use an independent key. If independence from previous watermarks cannot be guaranteed, the watermark must be skipped; that is, no new watermark is added until an independent key is obtained. The above example amplifies the difference between achieving an unbiased distribution for each token and an unbiased distribution for the entire sequence. In general use, this difference exists but is relatively small. We appreciate the opportunity to engage in this positive academic discussion and hope our explanation provides clarity on the nuances between token-level and sequence-level unbiased distributions. --- Rebuttal Comment 3.1: Comment: Ok, thanks for the explanation.
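The thought experiment above can be simulated with a toy Aaronson-style (exponential-minimum) sampler over a two-token vocabulary; a hedged sketch in which the `prf` hash and the key strings are illustrative stand-ins, not any scheme evaluated in the paper:

```python
import hashlib
import math

def prf(key, token_id):
    # Deterministic pseudo-uniform in (0, 1) derived from the key and token id.
    h = hashlib.sha256(f"{key}:{token_id}".encode()).digest()
    return (int.from_bytes(h[:8], "big") + 1) / (2 ** 64 + 2)

def gumbel_sample(probs, key):
    # Exponential-minimum trick: argmax_i log(u_i) / p_i follows probs exactly
    # over a random key, i.e. each individual token is unbiased.
    return max(range(len(probs)), key=lambda i: math.log(prf(key, i)) / probs[i])

probs = [0.5, 0.5]                                   # the "random 01 number" step
fixed_key = "hash of '\\nA new random 01 number: '"  # context never changes
reused = [gumbel_sample(probs, fixed_key) for _ in range(100)]
fresh = [gumbel_sample(probs, f"key-{t}") for t in range(100)]
print(len(set(reused)))  # 1: with a reused key, all 100 "random" bits coincide
```

With the reused key the 100 "random" bits are all identical, whereas fresh per-step keys keep the sequence close to uniform, which is exactly the token-level vs. sequence-level distinction discussed above.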
Summary: This paper explores the inherent trade-off between watermark strength and speculative sampling efficiency in large language models. A no-go theorem is presented, proving that it is impossible to maintain the highest watermark strength and sampling efficiency simultaneously. This paper also proposes a framework called the "two reweight framework" and develops two practical methods that focus on either maintaining watermark strength or sampling efficiency. Strengths: + **New framework**. The proposed framework allows for the integration of unbiased watermarking and speculative sampling techniques without altering the output distribution, thereby improving generation efficiency. + **Theoretical proof**. This paper rigorously proves a no-go theorem that demonstrates when the vocabulary size exceeds two, it is impossible to maintain both watermark strength and sampling efficiency simultaneously. Weaknesses: - **Limited experimental datasets, models, and coverage**. The experiments are conducted only on specific datasets (e.g., CNN_DAILYMAIL) and models (e.g., Llama-7b and Llama-68m). Additional benchmarks on different datasets and models would strengthen the generalizability of the findings. The experiments only cover a few tasks (text summarization and open-ended text generation). - **Algorithm clarity**. The pseudo-code provided for the algorithms could be further detailed, with clearer explanations for each step to improve reproducibility. - **Lack of analysis on the robustness of watermarking**. "On the Reliability of Watermarks for Large Language Models" mentions paraphrasing attacks, and copy-paste attacks, etc. Could this article potentially evaluate the robustness of the watermark under the two reweight framework? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. **Impact on the quality of the text**. 
I am intrigued by the potential impact on the quality of generated text when adjusting the balance between watermark strength and sampling efficiency. Apart from LOGPPL, are there alternative evaluation metrics such as ROUGE or BLEU utilized for assessment? 2. **Selection of target model and draft model**. The paper uses Llama-7b as the target model and the Llama-68m as the draft model. How are the target model and draft model selected for the article? Why did the paper choose the Llama series of models? Are there any specific requirements for their selection? 3. **Choice of different draft sequence length K**. Regarding line 613, the choice of draft sequence length is critical in deployment. Regarding Figure 2, when K is chosen from 1 to 4, a larger value of K (such as K = 4) generally demonstrates better performance in both reweighting methods. This implies that selecting a larger K value can enhance sampling efficiency while maintaining watermark strength. Consequently, if K were to be increased even further, what would the effects be? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations and potential negative societal impacts in Appendix F and G. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for spending time reviewing our work. However, I am afraid that your 4-point rating may be based on an incorrect premise. We would like to clarify a misunderstanding in your comment. You claimed that our paper only used Llama-7b and Llama-68m models. If you read lines 273-278, you will find that this is not the case. We have explained that we considered language models of different sizes, including Llama-7b and Llama-13b as the target models and Llama-68m as the draft model. We understand that in your opinion, our contributions only reach the level of '2: fair.', falling short of your expectations for a good contribution. However, our paper is the first study to investigate the inherent trade-off between watermark strength and speculative sampling. The new framework we propose lays a solid foundation, our no-go theorem provides valuable theoretical insights, and our two novel algorithms represent concrete advancements in techniques. Regarding the weaknesses you mentioned, we are willing to make improvements to meet your expectations. To do so, we kindly request more specific suggestions. 1. **In your opinion, are the current experiments sufficient or insufficient to verify our findings? If insufficient, could you please let us know the scale of experiments you would consider adequate for verifying our findings?** Although our findings are already proven in Theorems 1, 5, and 6, and we have already invested 1200 A6000 GPU hours (~$1k) to verify them, we understand that the reviewer still considers the experiments limited. However, we are unclear about the specific reasons that lead the reviewer to expect a further increase in experimental scale (and cost), despite the fact that our findings have been proven to hold. 
We would be grateful if the reviewer could tell us whether, in listing the experiments as a weakness, the reviewer regards the current experiments as sufficient or insufficient to verify our findings and, if insufficient, what scale of experiments the reviewer would consider adequate to verify our mathematically guaranteed findings. 2. **Regarding your concern about "Algorithm clarity," do you find any part of the algorithms difficult to understand, or do you simply anticipate that other readers might need clearer explanations?** We are determined to help readers understand our algorithms and make the results reproducible. In Algorithms 1-4, we have provided all the algorithms used in the paper, including every step of the procedure and the specific calculation for each value. Regarding reproducibility, we have provided all the code in OpenReview. If you find any part of the algorithms difficult to understand, we kindly request that you indicate which parts require further elaboration. We will strive to provide targeted explanations to help improve your understanding. If you simply anticipate that other readers might need clearer explanations, we would also appreciate it if you could point out which specific parts of the presentation could be improved for clarity. We will strive to provide targeted explanations to enhance the presentation. 3. **Regarding robustness, what specific research question do you expect us to investigate?** In the rebuttal below, we will provide an analysis based on the close relationship between watermark strength and robustness. The excellent work you cited, "On the Reliability of Watermarks for Large Language Models," reveals the essence that "Attacks dilute the watermark strength." Therefore, robustness and watermark strength are closely related. Robustness measures how much dilution the watermark strength can withstand while still remaining detectable (e.g., at a certain AUC). 
"On the Reliability of Watermarks for Large Language Models" addresses the research question "Can diluted watermarks still be detected as long as they are sufficiently long?" Our work primarily focuses on the research question of "Is it possible to accelerate the generation of watermarked content?". We understand that the reviewer expects us to investigate robustness-related research questions. However, we would appreciate it if you could clarify the specific research questions the reviewer expects us to address. We seek the reviewer's understanding that we are asking for specific expectations rather than directly supplementing experiments. This is because without clear experimental suggestions as guidance, we may conduct experiments that the reviewer considers irrelevant or repetitive, wasting valuable computational resources and funding (considering that the experiment cost of this paper is already 1,200 A6000 GPU hours, ~$1k). Only by clarifying the reviewer's expectations can we design and conduct targeted experiments. Next, we will address the reviewer's questions. --- Rebuttal 2: Title: Rebuttal by Authors (Continue) Comment: Here, we address the reviewer's questions. > "On the Reliability of Watermarks for Large Language Models" mentions paraphrasing attacks, and copy-paste attacks, etc. Could this article potentially evaluate the robustness of the watermark under the two reweight framework? "On the Reliability of Watermarks for Large Language Models" recruited human subjects to collect hand-written passages for both paraphrasing attacks and copy-paste attacks to study watermark robustness, which is costly. Since the goal of our paper is to contribute to improving the speed of generating watermarked content, we cannot afford to invest significant time and financial resources in evaluating watermark robustness, as that paper did. Instead, we focus on the unique contribution of this paper: accelerating the generation of watermarked text. 
We can provide a brief theoretical analysis: - For the Maintain Watermark Strength method, since $\widehat{R}_{E}(P)=R_{E}(P)$, the robustness is the same as the existing method, Vanilla Unbiased watermark. - For the Maintain Sampling Efficiency method, since the watermark strength is lower compared to Vanilla Unbiased watermark, the watermark strength after content editing is also lower. Therefore, the robustness is lower than the existing method, Vanilla Unbiased watermark. We believe that our paper and "On the Reliability of Watermarks for Large Language Models" both have unique contributions, each focusing on different directions, and both are original, novel, and significant. > I am intrigued by the potential impact on the quality of generated text when adjusting the balance between watermark strength and sampling efficiency. We emphasize multiple times in the paper that it does not affect the quality. If you read Theorems 5 and 6, you will find a theoretical guarantee that the generation distribution is unbiased. Therefore, the expectation of any metric remains unchanged. Thank you for telling us that you are intrigued by the potential impact on the quality of generated text, but the answer is that there is no impact. > Apart from LOGPPL, are there alternative evaluation metrics such as ROUGE or BLEU utilized for assessment? Yes, there are other evaluations, such as ROUGE, BLEU, METEOR, GLEU, MAUVE, SQuAD, and their various variants. In case the reviewer regards measuring certain scores as necessary to verify our findings, even though Theorems 5 and 6 have already provided theoretical guarantees, please let us know. > The paper uses Llama-7b as the target model and the Llama-68m as the draft model. I would like to remind you once again that the above statement is wrong. We not only use Llama-7b as the target model but also a larger model, Llama-13b. Computational resource limitations prevent us from using even larger models. 
> How are the target model and draft model selected for the article? Why did the paper choose the Llama series of models? Are there any specific requirements for their selection? There are no special selection requirements other than being an autoregressive language model. We chose the Llama series because it is a popular baseline in the community, as seen in [25,35,46], for example. > Consequently, if K were to be increased even further, how would the effects be? When K (the number of draft tokens) increases: 1. The number of accepted tokens also increases, leading to faster generation. 2. However, the computational overhead of the draft model and target model rises, causing slower generation. The average time to generate a token is determined by the competition between these two factors. The interplay between these two factors is illustrated through quantitative results in Section H of our paper. Once again, thank you for the time and effort you have put into reviewing our work. We sincerely hope to engage in positive academic exchanges with the reviewer. In order to meet the reviewer's expectations, we earnestly request that the reviewer provide specific expectations by answering the three questions outlined above. If there are any additional concerns, please feel free to ask, and we will be happy to answer all questions. --- Rebuttal Comment 2.1: Comment: Thanks for the response. I've raised my score. --- Reply to Comment 2.1.1: Comment: Thank you very much for going through our response and raising the score. We appreciate your prompt response.
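The K trade-off described in the rebuttal above (more accepted tokens vs. more draft overhead) can be sketched with a toy cost model. The acceptance probability `alpha` and the per-pass costs below are illustrative assumptions, not measurements from the paper; the expected-accepted-tokens formula follows the standard geometric form from the speculative decoding literature.

```python
import numpy as np

def avg_time_per_token(K, alpha, c_draft=1.0, c_target=10.0):
    """Toy cost model for speculative sampling (hypothetical constants).

    alpha:    per-token acceptance probability (overlap between draft and
              target distributions), assumed constant across positions.
    c_draft:  cost of one draft-model forward pass.
    c_target: cost of one target-model forward pass.

    With K draft tokens, the expected number of tokens produced per target
    call is (1 - alpha**(K + 1)) / (1 - alpha), while the cost of one
    speculative step is K * c_draft + c_target.
    """
    expected_tokens = (1 - alpha ** (K + 1)) / (1 - alpha)
    step_cost = K * c_draft + c_target
    return step_cost / expected_tokens

# Increasing K first helps (more accepted tokens per expensive target call),
# then hurts (draft overhead dominates), so there is an interior optimum.
times = [avg_time_per_token(K, alpha=0.8) for K in range(1, 11)]
best_K = 1 + int(np.argmin(times))
```

Under these assumed costs the optimum sits at a moderate K, mirroring the competition between the two factors described in the answer above.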
Summary: This paper studies the trade-offs between sampling efficiency and watermark strength to see if LLMs can generate watermarked output efficiently. It is proven in this work that it is not possible to simultaneously maintain the highest watermark strength and the highest sampling efficiency. Therefore, building on the no-go theorem, the paper provides two methods to maintain either one of them and conducts experiments to validate the effectiveness of these methods. Strengths: 1. This paper provides the first study into the relationship between sampling efficiency and watermark strength, which is of practical common interest. 2. This paper provides proof of the no-go theorem that it is not possible to simultaneously maintain the highest watermark strength and the highest sampling efficiency. 3. From experiments, the effectiveness of the proposed methods is validated clearly with visualizations. 4. Figure 1 provides an overview of the paper, which is clear and informative. Weaknesses: 1. The experiment section is relatively short and could include more analysis of the proposed methods on aspects such as an ablation study. Technical Quality: 3 Clarity: 3 Questions for Authors: Is the no-go theorem also true for other watermarking algorithms such as KGW[1]? [1] A watermark for large language models Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Although the experiments demonstrated the no-go theorem and effectiveness of proposed methods, maybe more analysis can be provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and for recognizing our research as the first study into the relationship between sampling efficiency and watermark strength. Regarding the experiment section, we agree that it appears to be relatively short in the main paper due to space limits. To provide a more comprehensive evaluation, we have moved additional experiments to Appendix H, where we consider different models and tasks. To summarize, all conclusions in Section 6 are still maintained in these experiments: the MWS method achieves the same watermark strength as the VUW method, while the MSE method maintains the same sampling efficiency as the VSpS method, all without compromising the output quality. Furthermore, our experiments allow us to observe the impact of entropy on watermark strength. We consider two tasks: open-ended text generation, which has higher entropy, and text summarization, which has lower entropy. We measure watermark strength using ANLPPT and find that it is higher for the open-ended text generation task. We also observe the influence of model size, with larger models having a slower Per Token Time (PTT) and different target model-draft model pairs resulting in different speculative efficiency. Overall, our experiments serve as a validation study, confirming that the properties stated in Theorems 1, 5, and 6 are indeed observed in practice. The total experimental cost amounts to 1,200 A6000 GPU hours. > Is the no-go theorem also true for other watermarking algorithm such as KGW[1]? Since KGW is not an unbiased watermark, our no-go theorem does not directly apply. According to [2,3], KGW leads to a decrease in generation quality. Although KGW is not an unbiased watermark, the intuition guided by unbiased watermarks may still be applicable to biased watermarks, but the precise formulation remains unclear. The theorem cannot simply discard the unbiased reweight constraint, as it would weaken the condition and invalidate the conclusion. 
Characterizing non-trivial biased watermarks and proving a corresponding no-go theorem could be an interesting extension and future work. > maybe more analysis can be provided We would be happy to include additional discussions on the experimental results using the additional page in the camera-ready version. Here are some details we would like to add if space permits: We have observed that MWS has a smaller performance loss compared to VSpS, making it highly practical. In fact, we strongly recommend using MWS in practice. We will also state the impact of different models on entropy and the influence of entropy on watermark strength. If you have any further questions or concerns, please do not hesitate to ask. We are more than happy to address all your inquiries. Considering our additional explanations for the experiments, we would be greatly appreciative if you could re-evaluate our work. Thank you once again for your valuable feedback. [1] A watermark for large language models [2] Mark My Words: Analyzing and Evaluating Language Model Watermarks [3] Unbiased Watermark for Large Language Models --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will maintain my rating.
Summary: This paper shows that it is impossible to simultaneously maintain the highest watermark strength and the highest sampling efficiency for content generation when integrating an unbiased watermarking method [1] with a speculative sampling strategy [2][3], supported by rigorous theoretical analysis and empirical results. [1] Hu, Zhengmian, et al. "Unbiased watermark for large language models." arXiv preprint arXiv:2310.10669 (2023). [2] Leviathan, Yaniv, Matan Kalman, and Yossi Matias. "Fast inference from transformers via speculative decoding." International Conference on Machine Learning. PMLR, 2023. [3] Chen, Charlie, et al. "Accelerating large language model decoding with speculative sampling." arXiv preprint arXiv:2302.01318 (2023). Strengths: This paper focuses on an interesting direction, integrating the watermarking method and speculative sampling to accelerate the sampling efficiency while maintaining the watermark strength. It helps us understand the interactions and tradeoffs between watermarking and sampling for content generation in LLMs, which is significant. Weaknesses: 1. What do the authors mean by "naively applying speculative sampling to a watermarked target distribution may significantly reduce the overlap probability with the draft distribution Q." in line 112? 2. In Fig.2 (a) (b), for MWS, the sampling efficiency (AATPS) is only a little smaller than that of VSpS and MSE. For example, for K=2 in (a) in terms of U score, the AATPS of MWS is smaller than that of VSpS or MSE by about less than 0.1, which should be acceptable since it achieves comparable watermark strength with VUW. It seems inconsistent with the claim that simultaneously accelerating the sampling efficiency while maintaining the watermark strength is impossible. May the authors explain this result? Technical Quality: 3 Clarity: 4 Questions for Authors: Please refer to the *Weakness* part. 
Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and for recognizing the significance of our work. We are grateful for the opportunity to address your questions. > What do the authors mean by "naively applying speculative sampling to a watermarked target distribution may significantly reduce the overlap probability with the draft distribution Q." in line 112? The overlap probability is defined as $\alpha(R_E(P),Q)$, where $R_E(P)$ is the watermarked target distribution and $Q$ is the draft distribution. Note that the total variation distance is a convex function, and $\mathrm{TV}(P,Q)=1-\alpha(P,Q)$. Applying Jensen's inequality, we have: $\mathbb{E}_E[\alpha(R_E(P),Q)]\leq\alpha(\mathbb{E}_E[R_E(P)],Q)=\alpha(P,Q)$ In practice, the gap in Jensen's inequality can be quite large, leading to a significant reduction in the average overlap probability. > In Fig.2 (a) (b), for MWS, the sampling efficiency (AATPS) is only a little smaller than that of VSpS and MSE. For example, for K=2 in (a) in terms of U score, the AATPS of MWS is smaller than that of VSpS or MSE by about less than 0.1, which should be acceptable since it achieves comparable watermark strength with VUW. We agree with your observation that the sampling efficiency (AATPS) of MWS is only slightly lower than that of VSpS and MSE. We believe that MWS is highly suitable for practical use as it maintains watermark strength while achieving performance close to VSpS. > It seems inconsistent with the claim that simultaneously accelerating the sampling efficiency while maintaining the watermark strength is impossible. May the authors explain this result? We would like to clarify that our findings do not contradict the no-go theorem. Even though the sampling efficiency of MWS is nearly as high as that of VSpS, there is still a statistically significant gap. The specific data can be found in Table 1, where the AATPS for MWS is 1.773 ± 0.003, and for VSpS, it is 1.857 ± 0.003. 
The no-go theorem states that theoretically, there will always be a gap, and our experiments demonstrate that this gap is small in practice. The two are not contradictory. If you have any further questions or concerns, please do not hesitate to ask. We are more than happy to address all of your inquiries. We would also like to emphasize that our work is not only significant but also highly original. We propose simultaneously accelerating sampling while incorporating watermarks, introduce the no-go theorem, and present two novel algorithms, MWS and MSE. All of these contributions are the first of their kind. Considering our additional explanations, we would be extremely grateful if you could re-evaluate our work. Thank you once again for your valuable feedback and for your time in reviewing our paper.
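The Jensen's-inequality argument in the rebuttal above can be checked numerically on toy distributions. The vocabulary size and the reweight family below are illustrative assumptions: we use an extreme but exactly unbiased family (a point mass at a token drawn from $P$), which is not the paper's actual reweight, purely to make the Jensen gap visible.

```python
import numpy as np

rng = np.random.default_rng(0)

def overlap(p, q):
    """alpha(P, Q) = 1 - TV(P, Q) = sum_x min(p(x), q(x))."""
    return float(np.minimum(p, q).sum())

V = 50                          # toy vocabulary size (assumption)
p = rng.dirichlet(np.ones(V))   # target distribution P
q = rng.dirichlet(np.ones(V))   # draft distribution Q

# Extreme but exactly unbiased reweight family: R_E(P) is a point mass at
# a token x ~ P, so E_E[R_E(P)] = P.  Since alpha(delta_x, Q) = q(x), the
# average overlap is E_E[alpha(R_E(P), Q)] = sum_x p(x) q(x) in closed form.
avg_overlap = float((p * q).sum())
unwatermarked = overlap(p, q)

# Jensen: E_E[alpha(R_E(P), Q)] <= alpha(E_E[R_E(P)], Q) = alpha(P, Q),
# and for this family the gap is typically large.
assert avg_overlap <= unwatermarked
```

For generic random `p` and `q` the inequality is strict, illustrating how an unbiased watermark can shrink the average overlap probability even though the *average* distribution is unchanged.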
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
The Sample-Communication Complexity Trade-off in Federated Q-Learning
Accept (oral)
Summary: This paper addresses the challenge of Federated Q-learning, focusing on the trade-off between sample complexity and communication complexity. Federated Q-learning involves multiple agents collaboratively learning the optimal Q-function for an infinite horizon Markov Decision Process (MDP) with finite state and action spaces. The paper proves the lower bound result on the number of rounds, which shows that linear speedup in sample complexity with respect to the number of agents requires at least $\Omega(\frac{1}{1-\gamma})$ rounds of communication. The second contribution is the algorithm that shows that this bound is tight. Strengths: The problem considered is interesting and well-motivated. The findings establish a good understanding of the tradeoffs in Federated Q-learning. The results are well-presented, and the theoretical part of the lower and upper bound looks solid. Weaknesses: While the main focus of this work is theoretical, the paper could benefit from an experimental evaluation of the algorithm. Technical Quality: 3 Clarity: 3 Questions for Authors: There are also studies on distributed multi-armed bandits, such as 'Parallel Best Arm Identification in Heterogeneous Environments' and 'Communication-efficient Collaborative Best Arm Identification,' which are relevant to Q-learning and RL problems. Could you elaborate on the main differences in techniques used in these studies? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and your constructive feedback. We appreciate the time and effort you spent on our paper and the helpful comments you provided. Please find our itemized responses to your questions below. - _While the main focus of this work is theoretical, the paper could benefit from the experimental evaluation of the algorithm._ Based on your suggestion, we performed some empirical studies and have included the results in the rebuttal. We consider an MDP with 3 states, namely $\{0,1,2\}$, and 2 actions, where the reward and transition kernel of states $0$ and $1$ are identical to those in the MDP outlined in Appendix B.1 in the paper. The reward and transition kernel for state $2$ are identical to those of state $1$. The values of $\gamma$ and $p$ are set to $0.9$ and $0.8$ respectively. We perform two empirical studies. In the first study, we compare the proposed algorithm Fed-DVR-Q to the Fed-SynQ algorithm proposed in Woo et al., 2023. The parameters for both algorithms were set to the suggested values in the respective papers. As evident from Fig 1, Fed-DVR-Q achieves a smaller error than Fed-SynQ within the same sample budget. Similarly, Fed-DVR-Q also requires much less communication (measured in the number of bits transmitted) than Fed-SynQ, demonstrating the effectiveness of the proposed approach and corroborating our theoretical results. In the second experiment, we study the effect of the number of agents on the sample and communication complexity of Fed-DVR-Q. The sample complexity decreases as $1/M$, demonstrating the linear speed-up, while the communication complexity is independent of the number of agents. Both these results confirm our theoretical findings. Thank you for your suggestion. We will add the empirical results in the final version of our paper. 
- _There are also studies on distributed multi-armed bandits, such as 'Parallel Best Arm Identification in Heterogeneous Environments' and 'Communication-efficient Collaborative Best Arm Identification,' which are relevant to Q-learning and RL problems. Could you elaborate on the main differences in techniques used in these studies?_ Thank you for pointing out this additional related work. Both the studies mentioned by the reviewer focus on best-arm identification in bandits, which is a different problem from learning the optimal Q-function in RL using Q-learning, due to the Markovian structure of the responses and the different objective functions. As a result, both the algorithmic design and analysis in these papers are quite different from those in our work. The lower bound in [1] is established using the heterogeneity across clients. On the other hand, the lower bound in our work is based on the bias-variance trade-off of Q-learning. Similarly, the algorithm designs in both [1] and [2] are based on arm-elimination approaches, which differ from the variance-reduced stochastic fixed point iteration used in our work. We will add a discussion on this in the final version of the paper. [1] Nikolai Karpov and Qin Zhang, "Parallel Best Arm Identification in Heterogeneous Environments" [2] Nikolai Karpov and Qin Zhang, "Communication-efficient Collaborative Best Arm Identification" --- Rebuttal Comment 1.1: Title: Response to Reviewer Comment: Thank you once again for taking the time to review our paper. We hope that our rebuttal satisfactorily addressed all your concerns. If you have any additional or follow-up questions based on the rebuttal, we would be happy to answer them!
Summary: This paper discusses the sample and communication complexity of federated tabular Q-learning. The main contributions can be summarized as follows. First, the paper provides a lower bound on the communication cost to guarantee a linear speed-up with respect to the number of agents. Then, it proposes a novel Federated Q-learning algorithm, called Fed-DVR-Q, which simultaneously achieves optimal order sample and communication complexities. Strengths: S1. The paper provides a lower bound in terms of communication complexity. This would be helpful to the community. S2. The work provides a novel algorithm that incorporates the variance reduction technique. It is shown that this algorithm has optimal order from both the sample complexity and communication complexity perspectives and achieves a linear speedup in terms of the number of agents. Weaknesses: W1. Both the lower and upper bounds only apply to the case of synchronous Q-learning with IID samples of the $(s_k, a_k, r_k, s_{k + 1})$ sequence at each agent. Moreover, it only applies to the tabular setup. Technical Quality: 4 Clarity: 4 Questions for Authors: Q1. Can you quantify the benefit of variance reduction in the upper bound? That is, what would the sample complexity be if we looked at a variant of your algorithm without the variance reduction part of the update rule? Doesn't the variance reduction technique typically lead to an improvement in terms of the constants in the sample complexity? I am a bit surprised to see that variance reduction is needed even to guarantee **order** optimal sample and communication complexities. Q2. Typically, to get convergence to an $\epsilon$-neighbourhood, the stepsize $\eta$ should be chosen depending on $\epsilon.$ For example, see Theorem 2 (a) in [BRS18]. However, in your Theorem 2, it appears $\eta$ can be any number in $(0, 1).$ This seems a bit surprising. Can you elaborate on why that is the case? Am I overlooking something? [BRS18]: Bhandari, J., Russo, D. 
and Singal, R., 2021. A Finite Time Analysis of Temporal Difference Learning with Linear Function Approximation. Operations Research, 69(3), pp.950-973. Q3. Also, it is unclear to me why you claim that your upper bound matches the lower bound. The lower bound is in terms of $N,$ while the upper bound is in terms of $\epsilon?$ Can you formally show that the two orders match? Q4. Finally, assuming your lower and upper bounds match, can you explain whether the proposed Fed-DVR-Q is a parameter-free algorithm? That is, does it need any knowledge of the unknown parameters of the underlying MDP to achieve order-optimal sample complexity? I would be happy to increase my score based on your response to my above question. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes, the authors have discussed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and your constructive feedback. We appreciate the time and effort you spent on our paper and the helpful comments you provided. Please find our itemized responses to your questions below. Q1. Can you quantify the benefit of variance reduction in the upper bound? That is, what would the sample complexity be if we looked at a variant of your algorithm without the variance reduction part of the update rule? Doesn't the variance reduction technique typically lead to an improvement in terms of the constants in the sample complexity? I am a bit surprised to see that variance reduction is needed even to guarantee order optimal sample and communication complexities. A1. For Q-learning, it is well-known that some form of variance reduction is present in all existing algorithm designs that achieve the optimal sample complexity with respect to $\gamma$. In [LCCWC23], the authors demonstrated that vanilla Q-learning, i.e., without variance reduction, has a sample complexity that is necessarily sub-optimal by a factor of $1/(1-\gamma)$. Thus, some form of variance reduction is crucial to achieve the optimal sample complexity. In the absence of variance reduction, our algorithm would achieve a sample complexity that is greater than the current one by a factor of $1/(1-\gamma)$ (along with logarithmic factors). Q2. Typically, to get convergence to an $\epsilon$-neighbourhood, the stepsize $\eta$ should be chosen depending on $\epsilon.$ For example, see Theorem 2 (a) in [BRS18]. However, in your Theorem 2, it appears $\eta$ can be any number in $(0, 1).$ This seems a bit surprising. Can you elaborate on why that is the case? Am I overlooking something? A2. Theorem 2(a) in [BRS18] applies when the learner updates the Q-function (or the parameter $\theta$ in their case) after *each* data point/observation from the environment. In other words, the mini-batch size is $1$. 
On the other hand, our algorithm takes a mini-batch ($\gg 1$) of samples, collates the information and then updates the Q-function. In both approaches, the fundamental motivation is to ensure that the variance of the stochastic updates is small. In [BRS18], the authors use an update with a large variance and balance it by choosing a small step size. In our work, we allow larger step sizes but require updates with smaller variance (obtained through mini-batching) to ensure updates with low variance. The dependence on $\varepsilon$ enters directly through the number of epochs $K$ and indirectly through the choice of $B$ and $J$, as they depend on the parameter $K$. Q3. Also, it is unclear to me why you claim that your upper bound matches the lower bound. The lower bound is in terms of $N,$ while the upper bound is in terms of $\epsilon?$ Can you formally show that the two orders match? A3. The variable $N$ in Theorem 1 is the sample complexity of the algorithm and its relation with the error $\varepsilon$ can be obtained through equations (4) and (5). Our Theorem 1 states that if the number of communication rounds in the algorithm is $\mathcal{O}(\frac{1}{1-\gamma})$ (up to logarithmic factors), then the error of the final output is $\Omega(1/\sqrt{N})$, where $N$ is the number of samples taken for each state-action pair at each agent. In other words, in order to obtain an $\varepsilon$-optimal Q-function, each agent needs to take at least $\Omega(1/\varepsilon^2)$ samples, i.e., the algorithm offers no linear speed up w.r.t. the number of agents. This is equivalent to saying that if any algorithm is designed such that it only takes $\mathcal{O}(1/M\varepsilon^2)$ samples per agent (or offers _any_ speed-up w.r.t. number of agents), then it must have at least $\Omega(\frac{1}{1-\gamma})$ rounds of communication. 
Our Theorem 2 states that our proposed algorithm Fed-DVR-Q is such that it takes $\mathcal{O}(1/M\varepsilon^2)$ samples per state-action pair at each agent and has $\mathcal{O}(\frac{1}{1-\gamma})$ rounds of communication. Note that this order matches that in the statement of the lower bound, thereby establishing the optimality of communication complexity. The optimality of the sample complexity follows immediately from the lower bound in (Azar et al. 2013). Q4. Finally, assuming your lower and upper bounds match, can you explain whether the proposed Fed-DVR-Q is a parameter-free algorithm? That is, does it need any knowledge of the unknown parameters of the underlying MDP to achieve order-optimal sample complexity? A4. Our algorithm Fed-DVR-Q is parameter-free in the sense that it does not require any knowledge of the parameters of the underlying MDP. Our algorithm, however, does have several parameters, whose values have been specified in Sec. 4.1.3. of the paper. We also use a hyperparameter $\eta \in (0,1)$ corresponding to the step size of the updates. As evident from the bounds in Theorem 2, it is preferable to have values of $\eta$ close to $1$. [LCCWC23]: G. Li, C. Cai, Y. Chen, Y. Wei, and Y. Chi. Is q-learning minimax optimal? a tight sample complexity analysis. Operations Research, 2023. --- Rebuttal Comment 1.1: Title: Response to Reviewer Comment: Thank you once again for taking the time to review our paper. We hope that our rebuttal satisfactorily addressed all your concerns. If you have any additional or follow-up questions based on the rebuttal, we would be happy to answer them!
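The variance-reduction mechanism referenced in A1 above can be sketched for a single agent on a toy tabular MDP, in the style of variance-reduced Q-learning (Wainwright, 2019). This is not the paper's federated Fed-DVR-Q algorithm; the MDP, step size, and batch sizes below are all illustrative assumptions. The key idea is to recenter cheap single-sample updates around a high-accuracy estimate of the Bellman operator at a reference point, so that the stochastic noise largely cancels.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a, :] = next-state probs
r = rng.random((S, A))                      # rewards in [0, 1]

def empirical_bellman(Q, n):
    """Monte Carlo estimate of T(Q) from n i.i.d. next-state samples per
    (s, a) pair (the synchronous sampling model)."""
    T = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            nxt = rng.choice(S, size=n, p=P[s, a])
            T[s, a] = r[s, a] + gamma * Q[nxt].max(axis=1).mean()
    return T

def vr_epoch(Q_bar, eta=0.5, n_ref=2000, n_step=1, iters=200):
    """One variance-reduced epoch: recenter cheap stochastic updates around
    a high-accuracy estimate T_ref of T(Q_bar) at the reference point."""
    T_ref = empirical_bellman(Q_bar, n_ref)
    Q = Q_bar.copy()
    for _ in range(iters):
        G = np.zeros((S, A))
        for s in range(S):
            for a in range(A):
                # Shared next-state samples: the two noisy terms cancel when
                # Q is close to Q_bar, which is the variance reduction.
                nxt = rng.choice(S, size=n_step, p=P[s, a])
                diff = (Q[nxt].max(axis=1) - Q_bar[nxt].max(axis=1)).mean()
                G[s, a] = T_ref[s, a] + gamma * diff
        Q = (1 - eta) * Q + eta * G
    return Q

# Exact optimal Q via value iteration with the true model, for reference.
Q_star = np.zeros((S, A))
for _ in range(500):
    Q_star = r + gamma * P @ Q_star.max(axis=1)

# A few epochs, each re-centering at the previous output.
Q = np.zeros((S, A))
for _ in range(4):
    Q = vr_epoch(Q)

init_err = np.abs(Q_star).max()             # error of the zero initialization
final_err = np.abs(Q - Q_star).max()
```

The federated version in the paper additionally averages such updates across agents between communication rounds; this single-agent sketch only illustrates why the recentered updates have low variance near the reference point.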
Summary: This paper investigates the sample and communication complexities of federated Q-learning with intermittent central aggregation of the Q-value function. The authors demonstrate that to achieve any speedup in sample complexity through federated collaboration, the communication complexity must be at least $\Omega(1 /(1-\gamma ))$. Additionally, the paper introduces a novel federated Q-learning algorithm incorporating variance reduction and minibatching techniques, achieving order-optimal sample and communication complexities. Strengths: - This paper considers the important trade-off problem between sample complexity speedup and communication cost in federated reinforcement learning. It provides a complete characterization of this trade-off in federated Q-learning, including the communication cost in bits. - Not only do the authors provide a complete characterization of the sample-communication trade-off and design a novel federated Q-learning algorithm that achieves order-optimal sample and communication complexities, but they also provide insights and intuitions into how infrequent communication fails to speed up sample complexity, and how their algorithm balances this trade-off. Weaknesses: - Several papers report that a _one-shot_ average is sufficient to achieve linear speedups in federated reinforcement learning (FedRL) [1,2]. This seems to contrast starkly with the authors' claim that infrequent communication does not speed up sample complexity. Could the authors clarify this discrepancy and discuss how their work relates to these results? - The related work section on Distributed RL is generally comprehensive; however, it omits some recent works on heterogeneous FedRL [3,4]. - There is a lack of discussion on the technical difficulties and novelties of the proposed approach. The authors mentioned that Theorem 1 is inspired **by** the analysis of single-agent Q-learning [5] and Theorem 2 is based on the analysis of variance-reduced Q-learning [6]. 
The authors should elaborate on how their analysis differs from the single-agent case, what new challenges arise in the federated setting, and what novel techniques are employed to overcome these challenges. Additionally, can these techniques be generalized to other settings? - In Eq. (7), should $\widehat{\mathcal{T}}$ be $\mathcal{T}$? Otherwise it is not defined. Overall, I am satisfied with the paper, with the first point being my primary concern. I would be happy to raise my score if the authors provide satisfactory clarifications on the above points. ### References [1] Liu, R., & Olshevsky, A. (2023). Distributed TD (0) with almost no communication. _IEEE Control Systems Letters_, _7_, 2892-2897. [2] Tian, H., Paschalidis, I. C., & Olshevsky, A. (2024). One-Shot Averaging for Distributed TD (λ) Under Markov Sampling. _IEEE Control Systems Letters_. [3] Xie, Z., & Song, S. (2023). FedKL: Tackling data heterogeneity in federated reinforcement learning by penalizing KL divergence. _IEEE Journal on Selected Areas in Communications_, _41_(4), 1227-1242. [4] Zhang, C., Wang, H., Mitra, A., & Anderson, J. (2024). Finite-time analysis of on-policy heterogeneous federated reinforcement learning. In _International Conference on Learning Representations_. PMLR. [5] Li, G., Cai, C., Chen, Y., Wei, Y., & Chi, Y. (2024). Is Q-learning minimax optimal? a tight sample complexity analysis. _Operations Research_, _72_(1), 222-236. [6] Wainwright, M. J. (2019). Variance-reduced $ Q $-learning is minimax optimal. _arXiv preprint arXiv:1906.04697_. Technical Quality: 3 Clarity: 3 Questions for Authors: - While I understand that the authors focus on homogeneous i.i.d. data to highlight the main ideas, could the authors comment on the difficulty of generalizing their results to the asynchronous sampling setting? Specifically, the authors mention that the lower bound applies to the asynchronous setting. A similar remark on Theorem 2 would be helpful. 
Please see Section Weaknesses for other questions. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations and open problems are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and for your constructive feedback. We appreciate the time and effort you spent on our paper and the helpful comments you provided. Please find our itemized responses to your questions below. - _Several papers report that a one-shot average is sufficient to achieve linear speedups in federated reinforcement learning (FedRL) [1,2]. This seems to contrast starkly with the authors' claim that infrequent communication does not speed up sample complexity. Could the authors clarify this discrepancy and discuss how their work relates to these results?_ Our result does not violate the results obtained in the existing studies on federated TD learning [1,2]. The key difference here is that the results in [1,2] consider TD learning to learn the value function directly instead of the Q-function. If TD learning is used to learn the value function directly, then the resultant algorithm aims to learn the fixed point of the operator $\mathcal{T}\_{TD} : \mathbb{R}^{|\mathcal{S}|} \to \mathbb{R}^{|\mathcal{S}|}$ given as $\mathcal{T}\_{TD}(V)(s) := r(s) + \gamma \sum_{s'} P(s,s')V(s')$, where $r$ and $P$ are the reward and probability transition functions, respectively, and $\gamma$ is the discount factor. Note that this is a **linear** function of $V$. On the other hand, our work focuses on learning the optimal Q-function via Q-learning. Specifically, we learn the fixed point of the operator $\mathcal{T}\_{QL} : \mathbb{R}^{|\mathcal{S}| |\mathcal{A}|} \to \mathbb{R}^{|\mathcal{S}| |\mathcal{A}|}$ given by $\mathcal{T}\_{QL}(Q)(s,a) := r(s,a) + \gamma \sum_{s'} P(s'|s,a) [\max\_{a'} Q(s',a')]$. Note that this is a **non-linear** function of $Q$. The difference in the communication requirement stems from the fact that the linearity of the Bellman operator in terms of the value function allows one-shot averaging to be sufficient to achieve optimal error rates. 
On the other hand, the non-linearity of the Bellman operator with respect to the Q-function means that one-shot averaging is no longer sufficient to achieve optimal error rates. If the operator whose fixed point is to be found is linear in the decision variable (e.g., the value function in TD learning), then the fixed point update only induces a variance term corresponding to the noise. However, if the operator is non-linear, then in addition to the variance term, we also obtain a *bias* term in the fixed point update. While the variance term can be controlled with one-shot averaging, more frequent communication is necessary to ensure that the bias term is small enough. A discussion regarding this difference between TD learning and Q-learning can also be found in [5] (from your comment), where the authors show that TD learning achieves the optimal sample complexity but Q-learning (without variance reduction) is necessarily sub-optimal in terms of its dependence on $\gamma$. Thank you for highlighting this interesting point. We will add this discussion in the revised version of the paper. - _The related work section on Distributed RL is generally comprehensive; however, it omits some recent works on heterogeneous FedRL [3,4]._ Thank you for pointing out the additional papers. We will add them to the related work section. [3] adopts a policy optimization perspective, which is different from the Q-learning paradigm considered in this work. Moreover, the algorithm in [3] incurs a linear communication cost, which is worse than that obtained in our work. Similarly, [4] focuses on on-policy learning and incurs a communication cost that depends polynomially on the required error $\varepsilon$. --- Rebuttal 2: Title: Rebuttal by Authors contd. Comment: - _There is a lack of discussion on the technical difficulties and novelties of the proposed approach. 
The authors mentioned that Theorem 1 is inspired by the analysis of single-agent Q-learning [5] and Theorem 2 is based on the analysis of variance-reduced Q-learning [6]. The authors should elaborate on how their analysis differs from the single-agent case, what new challenges arise in the federated setting, and what novel techniques are employed to overcome these challenges. Additionally, can these techniques be generalized to other settings?_ One of the challenges in the federated setting, relative to the results in [5] and [6], is the design of the communication schedule and its interplay with the convergence of the algorithm to the optimal Q-function. We mention that our analysis is inspired by [5] as we use the same hard instance used in that work, and the authors in [5] also use a bias-variance trade-off to establish the sub-optimality of the sample complexity of Q-learning. However, in terms of establishing our lower bound, none of the lemmas from the analysis of single-agent Q-learning in [5] can be trivially adopted for the federated learning scenario considered in our work, as the behaviour of each agent affects that of the others. This requires us to establish all technical results from scratch. Moreover, the communication schedule directly affects the bias-variance trade-off in the federated setting, which needs to be carefully analyzed and balanced. Establishing the impact of communication on this trade-off is central to establishing the lower bound in our work. In our analysis, we establish how the time interval between two communication rounds affects the bias and the variance terms. This allows us to show that infrequent communication results in a higher bias term, preventing linear speed-up with the number of agents. These analyses and conclusions are completely novel compared to the single-agent analysis in [5], especially because there is no communication involved in the single-agent setting and the focus of their analysis is on establishing the sample complexity. 
Furthermore, the interplay of communication and the bias-variance trade-off can be used to derive communication bounds for more general problems of distributed stochastic fixed point iteration, specifically with non-linear operators. For example, a similar analysis yields that distributed optimization of strongly convex functions using SGD requires a communication cost proportional to the condition number of the function. Thus, the proposed techniques in this work have implications beyond RL. A direct extension of the algorithm in [6] to the federated setting results in sub-optimal sample and communication complexities similar to (Woo et al, 2023). The novelty in our work is to show that using minibatching, as opposed to local updates, manages the bias-variance trade-off (referred to in the lower bound) much better, enabling us to achieve optimal sample and communication complexities. As mentioned earlier, this observation carries forward to other distributed stochastic fixed point iteration problems, thereby providing a template to design algorithms that operate at the optimal point of the sample-communication complexity trade-off curve. Since our algorithm design is different from that in [6], we need to derive new results to establish Theorem 2. Lastly, the impact of communication and quantization prevents us from directly adopting the results in [6] and requires a more careful and novel analysis to ensure the optimal convergence rates. Thank you for pointing this out. We will add a discussion along these lines in the final version of this paper. - _In Eq. (7), $\hat{\mathcal{T}}$ should be $\mathcal{T}$? Otherwise it is not defined_. That is correct. Thank you for pointing it out. We will fix the typo. --- Rebuttal 3: Title: Rebuttal by Authors contd. Comment: - _While I understand that the authors focus on homogeneous i.i.d. 
data to highlight the main ideas, could the authors comment on the difficulty of generalizing their results to the asynchronous sampling setting? Specifically, the authors mention that the lower bound applies to the asynchronous setting. A similar remark on Theorem 2 would be helpful._ It is reasonably straightforward to extend the results to the asynchronous sampling setting. At a high level, note that after a burn-in period depending on the mixing time of the behaviour policy, the state visitation distribution will be close to the true stationary distribution. From here on, we can run the algorithm almost as is, with a different choice of mini-batch sizes and recentering sample size parameters. Note that the only difference in the asynchronous setting as compared to the generative model is that the number of samples for each state-action pair will depend on the behaviour policy. By an appropriate choice of mini-batch sizes and recentering sample sizes, we can ensure that the error still decreases by a factor of 2 every epoch. Consequently, similar conclusions on the sample and communication complexity will also hold for the asynchronous setting. We would also like to point out that the sample complexity will be inversely proportional to the minimum *average* state-action visitation probability, similar to (Woo et al, 2023). As shown in that work, this is the best one can hope for and does not require each agent to cover all state-action pairs. Thank you for your helpful question. We will also add a discussion on this in the final paper. --- Rebuttal Comment 3.1: Title: Response to Reviewer Comment: Thank you once again for taking the time to review our paper. We hope that our rebuttal satisfactorily addressed all your concerns. If you have any additional or follow-up questions based on the rebuttal, we would be happy to answer them! 
--- Rebuttal Comment 3.2: Comment: I thank the authors for the comprehensive rebuttal and am content with the clarifications, discussions, and promised revisions. I have raised my score and will support the acceptance of this work.
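The linearity distinction between the TD operator and the Bellman optimality operator, which the rebuttal above uses to explain why one-shot averaging suffices for TD learning but not for Q-learning, can be checked numerically. The following is a toy sketch (the random MDP, shapes, and function names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = next-state distribution
r = rng.normal(size=(S, A))                 # rewards r(s, a)

def T_td(V, pi):
    # TD operator for a fixed policy pi: affine (hence "linear") in V
    r_pi = np.sum(pi * r, axis=1)
    P_pi = np.einsum('sa,sat->st', pi, P)
    return r_pi + gamma * P_pi @ V

def T_q(Q):
    # Bellman optimality operator: non-linear in Q because of the max
    return r + gamma * np.einsum('sat,t->sa', P, Q.max(axis=1))

pi = np.full((S, A), 1.0 / A)
V1, V2 = rng.normal(size=S), rng.normal(size=S)
# Averaging iterates commutes with the affine TD operator ...
assert np.allclose(T_td((V1 + V2) / 2, pi), (T_td(V1, pi) + T_td(V2, pi)) / 2)

# ... but not with the Bellman optimality operator: pick Q1, Q2 whose
# greedy actions disagree, so the max of the average < average of the maxes
Q1 = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
Q2 = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
assert not np.allclose(T_q((Q1 + Q2) / 2), (T_q(Q1) + T_q(Q2)) / 2)
```

The gap in the second check is exactly the kind of bias term the rebuttal describes: averaging agents' Q-estimates before applying the non-linear operator loses value relative to applying the operator first, which is why infrequent averaging cannot keep the bias small.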
null
null
Rebuttal 1: Rebuttal: Based on the suggestion by Reviewer YcCi, we have performed two empirical studies and included their results in the attached PDF. We refer the reader to the response to Reviewer YcCi for additional details about the experiments and a discussion of the results. Pdf: /pdf/16ef40bb8152ec5aa49e1a31eafb60bba1b907c3.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Label Privacy in Split Learning for Large Models with Parameter-Efficient Training
Reject
Summary: The paper revisited the problem of label leakage in split learning in the context of fine-tuning large models with parameter-efficient training. Based on modern use cases, they proposed two privacy-preserving protections for gradients and activations during split learning. The proposed methods are evaluated on several large models including Llama2-7B, fine-tuned using LoRA and full model fine-tuning. Strengths: 1. The paper moved forward a step on the urgent need to privacy-preserving split learning over large models and fine-tuning with LoRA. 2. The writing is generally good despite some minor issues. The flow of ideas is clear. 3. The proposed method is evaluated over different pre-trained large models, conforming to current real-world use cases of LLMs. Weaknesses: 1. There exist several related works discussing the attacks and defense regarding the label leakage in split learning. The authors may need to compare the differences between the proposed methods and previous literature. The evaluation part lacks the comparison to existing privacy-preserving solutions over label leakage and some trivial solutions such as directly applying differential privacy, which is easy to implement. 2. In modern use cases of API fine-tuning, apart from the applications of classification, text generation with LLMs and image generation with multimodal models and diffusion are more common cases. And it is very critical to protect labels in these applications. For example, labels in text generation can contain answers to private questions in the private dataset. However, the leakage study and proposed privacy-preserving methods do not apply to these applications. 3. There are some minor writing issues that could be improved. For example, content introducing API fine-tuning and potential privacy concerns can be shortened in the introduction. The paragraph from line 53 to 58 can be reorganized so that it won't leave '[18]' for a whole line. Same thing for line 209. 
On line 28, write the full name of LoRA before using the acronym. The authors are advised to describe split learning directly, rather than requiring readers to make extra effort to first understand what vertical federated learning is. [1] Wan, Xinwei, Jiankai Sun, Shengjie Wang, Lei Chen, Zhenzhe Zheng, Fan Wu, and Guihai Chen. "PSLF: Defending Against Label Leakage in Split Learning." In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pp. 2492-2501. 2023. [2] Kariyappa, Sanjay, and Moinuddin K. Qureshi. "ExPLoit: Extracting private labels in split learning." In 2023 IEEE conference on secure and trustworthy machine learning (SaTML), pp. 165-175. IEEE, 2023. [3] Erdoğan, Ege, Alptekin Küpçü, and A. Ercüment Çiçek. "Unsplit: Data-oblivious model inversion, model stealing, and label inference attacks against split learning." In Proceedings of the 21st Workshop on Privacy in the Electronic Society, pp. 115-124. 2022. [4] Xu, Hengyuan, Liyao Xiang, Hangyu Ye, Dixi Yao, Pengzhi Chu, and Baochun Li. "Permutation Equivariance of Transformers and Its Applications." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5987-5996. 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. From my understanding, split learning usually involves multiple clients and a server. But according to line 137, it is a client and several servers. Can the authors explain about this? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Discussed in the last section. Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer's insightful feedback. We address their concerns and questions below. >The evaluation part lacks the comparison to existing privacy-preserving solutions over label leakage and some trivial solutions such as directly applying differential privacy, which is easy to implement. We are grateful to the reviewer for bringing to our attention the overlooked baseline from paper [1]. We conducted detailed experiments using this framework to train DeBERTa and Flan-T5 on the SST2 and QNLI datasets. In all setups, PSLF[1] performed notably worse than DC and P$^3$EFT. See details in the attached PDF (Table 1). >There exist several related works discussing the attacks and defense regarding the label leakage in split learning. The authors may need to compare the differences between the proposed methods and previous literature. We agree that the paper would benefit from a further discussion of prior art in label privacy attacks and defenses. Unfortunately, many of the prior works in label privacy are not feasible to use in our practical setup. We explain this in more detail below. From the perspective of label privacy the Split Learning framework has two potential vulnerabilities: activations and gradients. In attempts to compromise label privacy, the community has proposed numerous algorithms based on gradient exploitation, such as Cosine attack[2] and Norm attack[2], as well as more sophisticated methods like the attacks from UnSplit[3] and ExPloit[4] mentioned by the reviewer. However, our privacy-preserving backpropagation method allows us to avoid sending actual gradients to the server. Instead, the server receives vectors that appear indistinguishable from random noise, and **only the client knows how to reconstruct the true gradients** from them. This makes any gradient-based attacks (e.g. [2,3,4]) impossible when using privacy-preserving backpropagation. 
Therefore, in our work, we validate only those attacks that exclusively use activations. For the same reason we do not validate defense methods such as Marvell[2], as it was initially designed to protect against gradient-based attacks. We would also like to point out that our privacy-preserving backpropagation is independent of the second part of our method protecting activations and can be used in combination with many other methods proposed in the literature - e.g., together with DC, PSLF, or Differential Privacy methods. To better explain this in the paper, we will extend Section 2.1 to discuss these attacks, and then discuss how exactly private backpropagation protects against these attacks in Section 3.3. >In modern use cases of API fine-tuning, apart from the applications of classification, text generation with LLMs and image generation with multimodal models and diffusion are more common cases. And it is very critical to protect labels in these applications. We agree that the tasks of ensuring label privacy in the setups enumerated by the reviewer are of significant importance. However, each of these different setups has its own challenges that require different approaches, and it's not feasible to fit them all into one paper. For instance, text generation uses the same tokens as inputs and labels and the information can leak in both directions. Hence, adapting our method to this task would require combining it with some input privacy method. We've chosen to focus on classification as a specific, but important setup. This focus allows us to thoroughly examine key aspects of privacy-preserving techniques in split learning. We believe this provides valuable insights that can serve as a foundation for future research. >From my understanding, split learning usually involves multiple clients and a server. But according to line 137, it is a client and several servers. Can the authors explain about this? 
This is indeed different from a traditional use of split learning or vertical federated learning. However, these are the two closest research areas known to us, and as such, we treat our setting as an extension of the standard split learning setup. If the reviewer would like to suggest an alternative taxonomy, we would gladly consider it. > Xu et al., "Permutation Equivariance of Transformers and Its Applications." We thank the reviewer for bringing paper [5] to our attention. Although the authors mainly focus on data and model privacy in applications of their method rather than label privacy, this paper is relevant to our work due to the shared goal of achieving private client-server learning using large transformer-based models. We will include a discussion of this paper in the related work section. >There are some minor writing issues that could be improved. We are grateful to the reviewer for their invaluable contribution to improving the quality of the paper. We will certainly address all the shortcomings they have identified in the final revision. [1] Wan, Xinwei, Jiankai Sun, Shengjie Wang, Lei Chen, Zhenzhe Zheng, Fan Wu, and Guihai Chen. "PSLF: Defending Against Label Leakage in Split Learning." In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 2023. [2] Oscar Li, Jiankai Sun, Xin Yang, Weihao Gao, Hongyi Zhang, Junyuan Xie, Virginia Smith, and Chong Wang. Label leakage and protection in two-party split learning. ICLR, 2022. [3] Erdoğan, Ege, Alptekin Küpçü, and A. Ercüment Çiçek. "Unsplit: Data-oblivious model inversion, model stealing, and label inference attacks against split learning." In Proceedings of the 21st Workshop on Privacy in the Electronic Society. 2022. [4] Kariyappa, Sanjay, and Moinuddin K. Qureshi. "ExPLoit: Extracting private labels in split learning." IEEE, 2023. [5] Xu, Hengyuan, Liyao Xiang, Hangyu Ye, Dixi Yao, Pengzhi Chu, and Baochun Li. 
"Permutation Equivariance of Transformers and Its Applications." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. --- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: I thank the authors for the rebuttal and for completing the additional experiments in a tight timeframe. My major concern about the comparison with baselines has been addressed by these experiments. Another question: can the authors explain why differential privacy is not compared, or, in other words, is there any particular reason why a comparison with differential privacy is unnecessary? --- Reply to Comment 1.1.1: Comment: >can the authors explain why differential privacy is not compared, or, in other words, is there any particular reason why a comparison with differential privacy is unnecessary? Our decision not to include differential privacy (DP) comparisons in the original paper was based on two factors: 1. As noted in [1] (Section 2), differential privacy and its variants are not directly applicable metrics in the context of split learning. 2. We found a lack of existing literature applying label differential privacy **specifically to split learning**, which presented challenges in identifying appropriate differential privacy baselines for comparison. However, during the review process, you brought to our attention the paper [2]. In this work, the authors developed a framework based on Randomized Response that satisfies Label Differential Privacy. We believe this method could indeed serve as a viable DP baseline that we had not previously identified. If the reviewer has any other suggestions for alternative DP baselines, we would be open to incorporating experiments with these in the final version of the paper. [1] Oscar Li, Jiankai Sun, Xin Yang, Weihao Gao, Hongyi Zhang, Junyuan Xie, Virginia Smith, and Chong Wang. Label leakage and protection in two-party split learning. ICLR 2022. 
[2] Wan, Xinwei, Jiankai Sun, Shengjie Wang, Lei Chen, Zhenzhe Zheng, Fan Wu, and Guihai Chen. "PSLF: Defending Against Label Leakage in Split Learning." In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pp. 2492-2501. 2023.
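As background for the randomized-response baseline discussed in the reply above, here is a minimal sketch of K-ary randomized response, the classic mechanism satisfying $\varepsilon$-label differential privacy. This is only an illustration of the general mechanism, not the PSLF algorithm from [2]; all names and parameter values are ours:

```python
import numpy as np

def randomized_response(label, num_classes, epsilon, rng):
    # Keep the true label with probability e^eps / (e^eps + K - 1);
    # otherwise report one of the other K - 1 labels uniformly at random.
    # This mechanism satisfies eps-label differential privacy.
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    if rng.random() < p_keep:
        return label
    others = [c for c in range(num_classes) if c != label]
    return int(rng.choice(others))

rng = np.random.default_rng(0)
reports = [randomized_response(1, num_classes=2, epsilon=1.0, rng=rng)
           for _ in range(20000)]
frac_true = np.mean(np.array(reports) == 1)
# with eps = 1 and K = 2, the true label is kept with probability e/(e+1)
```

The server trains on the noisy labels, and the known flipping probability can be used to debias the loss; the privacy level is controlled entirely by `epsilon`.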
Summary: This study addresses the privacy concerns associated with the fine-tuning of Large Language Models (LLMs), focusing on SplitNN. It explores how gradients and activations can leak data, potentially allowing attackers to reconstruct original data sets. In experiments, the proposed method reduces label leakage while maintaining minimal utility loss. Strengths: S1. The manuscript highlights significant privacy issues in LLM fine-tuning, specifically the potential for data leakage through gradients and activations in SplitNN. S2. Experimental results show that the proposed method significantly mitigates label leakage with minimal impact on utility. Weaknesses: W1. The claim that backpropagation is "conditionally linear" is not sufficiently rigorous. The manuscript suggests that $\text{backprop}(x, \theta, g_h+z)+\text{backprop}(x, \theta, g_h−z) = \text{backprop}(x, \theta, g_h)$ under the assumption that $\theta$ is constant. However, $\theta$ updates during each backpropagation, invalidating this assumption. Moreover, swapping the order of $\text{backprop}(x, \theta, g_h+z)$ and $\text{backprop}(x, \theta, g_h−z)$ could lead to different outcomes. Formal proof and a clearer statement of assumptions are needed to substantiate this claim. W2. Section 3.4 describes a method to protect activations that resembles secure multi-party computation [1], lacking novelty. Its effectiveness is also questionable when only one adapter is present. W3. The proposed protection mainly focuses on labels. In practice, data such as personal identifiers may pose a greater risk than labels. For example, knowing (a) Alice's salary (label) is included in the database is considered a more serious leakage than knowing (b) someone earns a salary of 3.2k. The manuscript should explore if the proposed method can also protect other sensitive features. **References** [1] Du, Wenliang, and Mikhail J. Atallah. 
"Secure multi-party computation problems and their applications: a review and open problems." Proceedings of the 2001 workshop on New security paradigms. 2001. Technical Quality: 2 Clarity: 3 Questions for Authors: See Weaknesses Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The elaboration on limitations is insufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
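The "conditionally linear" property debated in W1 can be sanity-checked numerically: for fixed inputs and frozen parameters, backpropagation is linear in the upstream gradient. Below is a toy numpy sketch with a single linear adapter standing in for the split model; all names and shapes are illustrative, and note the factor of 2 that a practical protocol would normalize away:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))    # activations entering the adapter (batch x dim)
g_h = rng.normal(size=(4, 3))  # true upstream gradient dL/dh
z = rng.normal(size=(4, 3))    # client-side masking noise

def backprop(x, g):
    # Stateless gradient computation for a linear adapter h = x @ theta:
    # returns dL/dtheta given g = dL/dh, and is linear in g for fixed x, theta.
    return x.T @ g

# The server only ever sees the masked vectors g_h + z and g_h - z,
# yet the client can recover the true adapter gradient from the replies.
combined = backprop(x, g_h + z) + backprop(x, g_h - z)
assert np.allclose(combined, 2 * backprop(x, g_h))
```

This is exactly why the property only holds when the parameters are not updated between the two `backprop` calls: if `theta` changed in between, the two vector-Jacobian products would be taken through different maps and the masking terms would no longer cancel.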
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and provide responses to their concerns hereafter. > The claim that backpropagation is "conditionally linear" is not sufficiently rigorous. The manuscript suggests that $\text{backprop}(x, \theta, g_h{+}z){ + }\text{backprop}(x, \theta, g_h{-}z) = \text{backprop}(x, \theta, g_h)$ under the assumption that $\theta$ is constant. However, $\theta$ updates during each backpropagation, invalidating this assumption. Moreover, swapping the order of $\text{backprop}(x, \theta, g_h{+}z)$ and $\text{backprop}(x, \theta, g_h {-} z)$ could lead to different outcomes. Formal proof and a clearer statement of assumptions are needed to substantiate this claim. We believe there has been a misunderstanding: the `backprop` API method **does not update $\theta$, it only computes the gradients** and returns them to the client (see line 146-148). Instead, the **client updates $\theta$,** which is possible because $\theta$ are parameter-efficient adapters. The detailed sequence of gradient computation and parameter updates is described in Algorithm 2 in Appendix B. Specifically, line 14 executes the `private_backprop` algorithm (during which the `backprop` API method is called multiple times). **After the gradients are computed** (and only then), the client updates the adapter set. If $\theta$ were updated during each `backprop` call , the equality would indeed be violated. For this reason, **we deliberately design `backprop` to be stateless**, i.e. it merely computes gradients with respect to adapter parameters based on given gradients with respect to activations, without changing either the adapter weights or the weights of the original model. >Section 3.4 describes a method to protect activations that resembles secure multi-party computation [1], lacking novelty. Our approach is indeed related to the area of multi-party computation. However, we respectfully disagree that this affects our novelty. 
Multi-party computation is a broad research area that contains general cryptographic protocols (e.g. secret sharing). However, applying these protocols naively to the task of LLM fine-tuning would be inefficient, as it introduces an overhead of several orders of magnitude (e.g. [8], page 7). The core novelty of our work is an efficient protocol specifically for the task of LLM fine-tuning. This protocol relies on the specifics of backpropagation (Section 3.3) to protect gradients and modifies the training procedure to protect the activations (Section 3.4). While we build on some prior ideas from general multi-party computation, we maintain that our approach is novel. >The proposed protection mainly focuses on labels. In practice, data such as personal identifiers may pose a greater risk than labels. For example, knowing (a) Alice's salary (label) is included in the database is considered a more serious leakage than knowing (b) someone earns a salary of 3.2k. The manuscript should explore if the proposed method can also protect other sensitive features. We agree that the privacy of personal identifiers is highly important. Fortunately, this problem is broadly studied in prior work [1,2,3]. In our work, we explore label privacy as one (but not the only) important component of private learning [4,5,6,7]. Since our approach is orthogonal to most feature privacy methods, it can be combined with these methods to protect both inputs and labels. > Its effectiveness is also questionable when only one adapter is present. Originally, we only considered training with multiple adapters for our experiments. To address your concern, we conducted additional experiments to better understand the effectiveness of our method with only $n=1$ adapter present. We report these experiments in Table 2 in the attached PDF and summarize them below. Surprisingly, we found that the $n=1$ setup demonstrates competitive performance on some (but not all) tasks. 
Still, training with a single adapter is less stable and, on average, inferior to the $n=2$ setup (see details in the attached PDF). In future work, it is worth exploring the single-adapter setup further. We will include this evaluation with additional discussion in the final version. We did our best to address your concerns with clarifications and additional experiments. We hope that this alleviates the weaknesses raised in the review and respectfully ask you to re-evaluate your score. If you have any further suggestions (through review edits), we would be glad to apply them as well. [1] Yansong Li, Zhixing Tan, and Yang Liu. Privacy-preserving prompt tuning for large language model services. ArXiv, abs/2305.06212, 2023. [2] C. Song and A. Raghunathan, “Information leakage in embedding models,” ACM SIGSAC 2020. [3] Xudong Pan, Mi Zhang, Shouling Ji, and Min Yang. Privacy risks of general-purpose language models. IEEE Symposium on Security and Privacy. [4] Oscar Li, Jiankai Sun, Xin Yang, Weihao Gao, Hongyi Zhang, Junyuan Xie, Virginia Smith, and Chong Wang. Label leakage and protection in two-party split learning. ICLR 2022. [5] Jiankai Sun, Xin Yang, Yuanshun Yao, and Chong Wang. Label leakage and protection from forward embedding in vertical federated learning. arXiv:2203.01451, 2022. [6] Junlin Liu and Xinchen Lyu. Clustering label inference attack against practical split learning. arXiv:2203.05222, 2022. [7] Tianyuan Zou, Yang Liu, Yan Kang, Wenhan Liu, Yuanqin He, Zhihao Yi, Qiang Yang, and Ya-Qin Zhang. Defending batch-level label inference and replacement attacks in vertical federated learning. IEEE Transactions on Big Data, 2022. [8] Brian Knott, Shobha Venkataraman, Awni Hannun, Shubho Sengupta, Mark Ibrahim, and Laurens van der Maaten. Crypten: Secure multi-party computation meets machine learning. NeurIPS 2021. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I thank the authors for their thorough response. 
However, part of my concerns remain unaddressed. W1. I now understand that the statement is **correct** because parameters are not updated immediately after `backprop`. Despite this, I still recommend to - highlight this statelessness in Section 3.3 - modify Equation 1 as it does not reflect a linearity property. I guess you may want to put something like $\text{backprop}(a, b, c_1) + \text{backprop}(a, b, c_2) = \text{backprop}(a, b, c_1+c_2)$ - develop a theoretical guarantee for this mechanism, as the accumulation of Gaussian noise can lead to some probability bound that guides the selection of the number of steps $m$. (This could be a significant contribution) W2. I agree that _applying secure multi-party computation (MPC) to split learning for label protection is novel_. However, applying an existing MPC approach is different from designing a new MPC approach. This paper appears to claim to design a new MPC approach, because there is no citation or discussion of the MPC mechanism used. However, this proposed new mechanism must be compared with other MPC algorithms to support its novelty. The authors should clearly state whether they are designing a new MPC method or using an existing one, followed by a comparison to the literature or clear citations. > The core novelty of our work is an efficient protocol specifically for the task of LLM fine-tuning. A follow-up concern for this response: Is this method specifically designed for LLM fine-tuning? In the original manuscript, it seems to apply to all neural networks optimized by gradient descent. **Two-party case**: > Originally, we only considered training with multiple adapters for our experiments The two-party case is not an exceptional case of multi-party split learning. Instead, the defense of the proposed approach is weakened as the number of parties decreases. The decreasing trend depends on the noise scale. A theoretical analysis is expected to demonstrate **what values of $\mathcal{K}$ would likely invalidate the defense**. W3. 
I understand these are orthogonal topics, but my major concern is whether the focused scenario exists in practice. Can the authors provide specific applications where labels urgently need privacy protection? --- Reply to Comment 1.1.1: Comment: > Is this method specifically designed for LLM fine-tuning? In the original manuscript, it seems to apply to all neural networks optimized by gradient descent. In the original manuscript, we note that our method leverages existing PEFT properties (e.g., **lines 11-12, 53-58, and 243**) and is applicable to any model trained with PEFT. Consequently, the scope is not limited to LLMs (for instance, PEFT is also applied to ViT[2] and diffusion models) or to fine-tuning (e.g., see [3]). However, LLM fine-tuning remains the most prevalent application. Unfortunately, our method is not suitable for arbitrary neural networks. Specifically, the model must possess certain properties typical of PEFT — a large frozen model on the server side and a small number of trainable parameters. If you are interested in more detailed explanations, we would be happy to provide them. >**Two-party case**: We are unsure about your exact concern here, so we will try to respond to each point to the best of our understanding. >the defense of the proposed approach is weakened as the number of parties decreases We respectfully disagree with this statement. The experimental results with varying numbers of adapter sets ($n$) (see Table 2 in the attached PDF) **do not demonstrate this trend**. For instance, the results on SST2 with $n=4$ are inferior to those with $n=2$. >The decreasing trend depends on the noise scale. In Section 3.4, we employ noise solely in the coefficients $\xi$ in Eq. 5. Indeed, as the number of adapter sets ($n$) increases, the standard deviation of this noise increases as $\sqrt{n}$. However, the noise scale can also be directly amplified by increasing the standard deviation of the distribution from which $\xi$ is sampled. 
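As a quick aside, the $\sqrt{n}$ scaling mentioned above is the standard behavior of summed independent noise, which can be checked numerically. This is a standalone illustration (variable names are hypothetical, not code from the paper): the noise contributed by $n$ adapter sets is modeled as the sum of $n$ i.i.d. Gaussian terms.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1          # std of the noise added to each coefficient (illustrative)
trials = 200_000

for n in (1, 2, 4, 16):
    # total noise: sum of n i.i.d. N(0, sigma^2) terms, one per adapter set
    total_noise = rng.normal(0.0, sigma, size=(trials, n)).sum(axis=1)
    # the empirical std should track sigma * sqrt(n)
    print(n, round(total_noise.std(), 4), round(sigma * np.sqrt(n), 4))
```

The takeaway matches the rebuttal's point: the noise level can be raised either by increasing $n$ (slow, $\sqrt{n}$ growth) or directly by increasing $\sigma$ (linear).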
In our preliminary experiments, we attempted to vary the magnitude of the noise while keeping $n$ constant, but did not observe a clear correlation with the final results. If the reviewer is interested in this question, we can set up additional experiments to provide an answer. >A theoretical analysis is expected to demonstrate what values of $\mathcal{K}$ would likely invalidate the defense. We did not use the notation $\mathcal{K}$ in our paper, but we assume this is a typo and the reviewer meant the number of adapter sets, $n$. We do not think that any particular value of $n$ will invalidate the defense. Even with $n=1$, the adversarial regularizer prevents the activations from pulling apart the feature representations for each label (unlike in Section 3.2). Our experiments conducted during the rebuttal phase with $n=1$ confirm this (for example, on SST2, the result with $n=1$ outperforms the DC baseline; see Table 2 for details). We would also like to emphasize that the rationale for using $n>1$ is described **in line 250**. To elaborate, we aim to create a situation where the activation distribution for each individual adapter is difficult to cluster or attack, yet the weighted sum of activations from Eq. 4 yields a distribution that is "simple" for subsequent classification. Achieving such a scenario for a pre-trained LLM with $n=1$ seems challenging, as we would simultaneously be training the same activation distribution to be both "simple for classification" and "difficult for clustering", whereas during conventional training, these properties typically correlate (see Figure 1). > Can the authors provide specific applications where labels urgently need privacy protection? One such example is provided in **Section 1, lines 48-52**. Additionally, there are several canonical examples where label privacy is crucial: 1. Advertising[4]. Consider an advertising platform A and an advertiser company B. 
Party A can record viewing history, while B possesses the visitor's conversion rate. B's labels remain private as purchases occur exclusively on their site or application. 2. Finance[5, 6]. An invoice agency A contributes invoice-related features, while a bank B provides credit data and SME labels. They collaboratively construct a risk model. [1] Bonawitz et al. Practical secure aggregation for privacy-preserving machine learning. ACM SIGSAC Conference on Computer and Communications Security 2017. [2] Dosovitskiy et al. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR 2021. [3] Lialin et al. Relora: High-rank training through low-rank updates. ICLR 2023. [4] Li et al. Label leakage and protection in two-party split learning. ICLR 2022. [5] Cheng et al. Federated learning for privacy-preserving AI. Communications of the ACM, 63(12):33–36, 2020. [6] Hu et al. Is vertical logistic regression privacy-preserving? a comprehensive privacy analysis and beyond. arXiv preprint arXiv:2207.09087, 2022. --- Rebuttal 2: Comment: >highlight this staleness in Section 3.3 >modify Equation 1 We thank the reviewer for their recommendations and will add these clarifications to the next revision of the paper. > developing theoretical guarantee Following the reviewer's recommendation, we provide theoretical analysis for privacy guarantees of the `private_backprop` algorithm. We use notations $B$ for `batch_size` and $d$ for `hidden_size`; $h_b, g_b, l_b$ correspond to activation, gradient and loss of the $b$-th batch element; $g_h$ represents a vector of concatenated gradients of all batch elements. We consider binary classification as a task with the minimum number of possible label combinations - $2^B$. We consider significantly stronger assumptions regarding the attacker's capabilities - namely, a white-box scenario. We assume the server knows the client-side model and, consequently, all possible $2^B$ vectors $g_h$ for different label sets. 
Thus, the server's task is to determine which of the $2^B$ label sets corresponds to the current batch based on the transmitted vectors. We investigate the minimum $m$ required to ensure that all $2^B$ sets remain equally probable from the attacking server's perspective in several possible setups: 1. Two non-interacting servers. In this scenario, it suffices to set $m=2$ and send one vector $\xi_i$ to each server $i$. From each server's perspective, all $2^B$ variants have equal probability because for any $\xi_i$ and a given $g_h$, there exists a vector $\eta$ such that $g_h$ belongs to the span of $\xi_i$ and $\eta$. 2. Single server. In this scenario, it is sufficient to set $m=B$. To show this, we note that for the $b$-th element of the batch, $\partial l_b/\partial h_b = \partial l_b/\partial p_b \times \partial p_b / \partial h_b$ holds, where $p_b\in \mathbb{R}$ is the head's prediction for activations $h_b$ - the probability of class 1. $\partial p_b/\partial h_b$ is a constant Jacobian of rank 1 and does not depend on the label value. Thus, both possible vectors $\partial l_b/\partial h_b$ lie in the Jacobian's image and belong to the same one-dimensional subspace. Therefore, it is sufficient for the client to send a basis vector of the corresponding one-dimensional subspace $\partial p_b/\partial h_b$ for each batch element $b$ (and zero vectors for the remaining batch elements) one by one. Knowing $\alpha_b$, the client can reconstruct the corresponding contribution of the $b$-th element to the weight gradient $g_{\theta}$. The server, however, cannot determine which label generated the given gradient for each example, as both lie on the same line. 3. Single server, $m < B$. In this setup, the client is not able to protect all gradients of the batch. 
Indeed, for $B=3$ and $m=2$, the set of $2^3$ possible gradient combinations cannot be embedded in any $2$-dimensional plane that the client can construct from $2$ vectors (the linear span of these $2^3$ gradients is $3$-dimensional). At most, the client can fully protect $m-1$ labels, while the server will know the remaining $B - m + 1$ labels up to a flip of all labels. We want to emphasize again that the above results were obtained under a white-box assumption, which is significantly stronger than our practical setup. The general case is considerably more challenging for the attacking side, and developing a possible attack or determining the theoretical limits of the attacker's capabilities is a complex task. However, we believe that the theoretical analysis presented above may be a good first step in this direction. >This paper appears to claim designing a new MPC approach because there is no citation or discussion of the used MPC mechanism. Thank you for your comprehensive comment. We now have a better understanding of the reviewer's initial concern and will endeavor to provide a detailed response below. However, we believe there may have been a misunderstanding. **We do not state** that our method for protecting activations, described in Section 3.4 and referenced by the reviewer in W2 of the official review, pertains to MPC. Concurrently, we acknowledge that our method could be considered as belonging to MPC in a broader sense, as the `private_backprop` algorithm shares common ideas with this field. We **discuss this similarity in Section 3.3, L217-222, and cite prior work [1]** that leverages MPC. We have also used the phrase "multi-party" in the paper in reference to our method (L11, 15, 38), but **we have not explicitly mentioned MPC**. We employed these terms to highlight the potential benefits of a multiple-server setup when using `private_backprop` with the fine-tuning API, which we discuss in detail in L223-235. 
If the reviewer disagrees with the use of the phrase "multi-party split learning," we are open to considering an alternative taxonomy. We hope our explanation has helped to resolve the misunderstanding. If the reviewer has any remaining questions regarding this concern, we would be glad to address them.
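The rank-one Jacobian argument in setup 2 of the theoretical analysis above can be sanity-checked with a small numerical sketch. Assumptions (illustrative, not the paper's code): a logistic-regression head with binary cross-entropy, so $p = \sigma(w \cdot h)$ and the activation gradient is $(p - y)\,w$ for label $y$. Both label choices then give scalar multiples of the same vector $w$, i.e. they lie on one line.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
d = 8
w = rng.normal(size=d)      # weights of a hypothetical logistic head
h = rng.normal(size=d)      # activations for one batch element

p = sigmoid(w @ h)
# For binary cross-entropy l = -[y log p + (1-y) log(1-p)]:
#   dl/dp = (p-y)/(p(1-p)),  dp/dh = p(1-p) w  =>  dl/dh = (p - y) w
g0 = (p - 0.0) * w          # gradient if the true label were y = 0
g1 = (p - 1.0) * w          # gradient if the true label were y = 1

# Both candidates are colinear (they lie in the image of the rank-1 Jacobian),
# so absolute cosine similarity is 1 and the server cannot tell the labels apart.
cos = g0 @ g1 / (np.linalg.norm(g0) * np.linalg.norm(g1))
print(abs(cos))             # 1.0 (up to floating-point error)
```

This is exactly why, in the analysis above, sending a basis vector of the one-dimensional subspace reveals nothing about the label itself.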
Summary: This paper addresses privacy leakage during API-based Parameter-Efficient Fine-Tuning (PEFT). The proposed P3EFT is a multi-party split learning algorithm that leverages PEFT adjustments to uphold privacy with minimal performance overhead. The method proves competitive in both multi-party and two-party setups while achieving higher accuracy. Strengths: Researching API-based fine-tuning for large models is an intriguing topic, especially considering that many clients face challenges loading such large models due to size and computational constraints. In this scenario, privacy concerns regarding client data become paramount. This paper aims to mitigate potential privacy leakage by obfuscating gradients and parameters communicated during transmissions. Weaknesses: Their approach shows limited privacy improvement compared to the scenario Without LoRAs, as indicated in Tables 1, 2, and 3, thereby restricting the overall benefits. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the justification for using n=2 in line 332 for all the datasets? 2. How does the server benefit from this split learning? How does the inference work after fine-tuning? 3. Address weaknesses. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper addresses limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and answer their questions below. >Their approach shows limited privacy improvement compared to the scenario Without LoRAs, as indicated in Tables 1, 2, and 3, thereby restricting the overall benefits. The baseline 'Without LoRAs' represents the best-case scenario for privacy, since it does not transmit anything but input data. More advanced algorithms such as DC and P$^3$EFT (ours) cannot improve privacy compared to this scenario. However, training Without LoRAs means that the server-side model remains unchanged and cannot adapt to the task, leading to substantially worse accuracy. As a result, our method offers significantly higher accuracy (avg. 24.8% higher for DeBERTa). Ultimately, it comes down to the client's priorities - if privacy is paramount, they can opt for not training LoRAs, but if quality is more important and they're willing to accept a slight decrease in privacy, they can take advantage of the opportunity our method provides. >What is the justification for using n=2 in line 332 for all the datasets? Our preliminary experiments indicated that increasing $n$ barely improves the privacy of activations while introducing additional computational overhead. However, we agree that this matter should be explored further. To that end, we conducted additional experiments with various values of $n$ and include the results in the attached PDF (see Table 2). We thank the reviewer for this question and will add an extended ablation study on this hyperparameter in the final version of the paper. >How does the server benefit from this Split learning? We briefly touch upon this in Section 1 (lines 25-31), but we agree that our paper would benefit from a more detailed discussion. To recall, our setup is based on the popular fine-tuning APIs, such as [1, 2, 3, 4, 5]. These APIs can be broadly split into two categories: * APIs for fine-tuning proprietary models (e.g. 
by OpenAI [1]) * APIs for fine-tuning large open-source models (e.g. Hugging Face [2] or Replicate [5]) In both cases, the API provider receives some form of compensation (i.e. money) for each use of their API. In turn, the clients benefit from the provider’s superior model (former type) or infrastructure (latter). By using privacy-preserving fine-tuning, the provider can reach more privacy-minded clients, which can increase their revenue, popularity, or similar. > How does the inference work after fine-tuning? The most obvious approach is to perform inference using the API's `forward` method, just as during training. Alternatively, several prior works [6, 7, 8] propose specialized algorithms for privacy-preserving inference that could be adapted to inference LLMs fine-tuned with our approach. We believe that this question is highly relevant to our work and plan on discussing it further in a separate subsection. [1] https://platform.openai.com/docs/guides/fine-tuning [2] https://huggingface.co/autotrain [3] https://octo.ai/docs/media-gen-solution/fine-tuning-stable-diffusion/fine-tuning-stable-diffusion [4] https://dreamboothapi.ai/ [5] https://replicate.com/docs/guides/fine-tune-a-language-model [6] Edward Chou, Josh Beal, Daniel Levy, Serena Yeung, Albert Haque, and Li Fei-Fei. Faster cryptonets: leveraging sparsity for real-world encrypted inference. arXiv preprint arXiv:1811.09953, 2018. [7] Dacheng Li, Rulin Shao, Hongyi Wang, Han Guo, Eric P Xing, and Hao Zhang. Mpcformer: fast, performant and private transformer inference with mpc. arXiv preprint arXiv:2211.01452, 2022. [8] Jinglong Luo, Yehong Zhang, Jiaqi Zhang, Xin Mu, Hui Wang, Yue Yu, and Zenglin Xu. Secformer: Towards fast and accurate privacy-preserving inference for large language models. arXiv preprint arXiv:2401.00793, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful rebuttal. 
I appreciate the clarifications provided regarding the privacy trade-offs, particularly how the 'Without LoRAs' scenario represents the best-case privacy option and how your method balances this with significantly improved accuracy. The additional experiments on n=2 and your plan to include an ablation study in the final version address my concerns well. Your detailed explanation of the server’s benefits from split learning and the inference process post-fine-tuning also adds clarity. However, I will maintain my original score as the paper offers a valuable contribution, albeit with some trade-offs that merit further exploration.
Summary: This paper proposes an algorithm to preserve label privacy while achieving good accuracy in the split learning regime. The algorithm is used in parameter-efficient fine-tuning and is empirically tested on several language models. Strengths: The paper clearly presents its motivation and contribution. The modification to back-propagation is reasonable and empirically effective across the three models and different attacks that are tested. Weaknesses: The main concern is the scalability of this method. For one iteration, the number of backpropagations is m (at least 2), which is too slow even for PEFT. The computation cost of PEFT is 2/3 of full training, so if m=2, the total cost of this method is 4/3 of full training. Looking at the code, there are 7 hyperparameters introduced by this method, which may be hard to use in practice. I would suggest the authors fix some hyperparameters that the algorithm is robust to, to reduce the number of tunable hyperparameters. Also, the experimental results on SST2 show severe leakage of around 10% compared to without LoRA (even though this is relatively weaker than other methods). Technical Quality: 3 Clarity: 3 Questions for Authors: Have the authors considered other PEFT methods like linear probing and BiTFiT? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: As in its current presentation, the method is limited to language models, split learning (two parties), and label privacy. The empirical evidence is limited to text classification (specifically, this method does not apply to natural language generation, which LLaMA is originally trained for) and LoRA. Each limitation can be relaxed, e.g. extending to vision models, data reconstruction, additional PEFT, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and address their concerns and questions below. >The main concern is the scalability of this method. For one iteration, the number of backpropagation is m (at least 2), which is too slow even for PEFT. The computation cost of PEFT is 2/3 of full training so if m=2, the total cost of this method is 4/3 of full training. Our approach does indeed introduce computational overhead, similarly to other privacy-preserving techniques. You are correct about per-iteration slowdown, but we would like to emphasize that model fine-tuning typically requires much fewer iterations than full training. For instance, LoRA adapters are known to fine-tune LLMs with 10-175B parameters in a matter of hours or days, using much less hardware than full training[1, 2]. Thus, we believe that our approach can be fast enough for some practical applications, even after accounting for the overhead. Other privacy-preserving fine-tuning methods also introduce considerable overhead. For instance, federated learning methods based on Homomorphic Encryption or Multi-Party Secure Computation achieve fully private training but incur overhead of several orders of magnitude (e.g. [5], page 7). Other existing methods, such as Distance Correlation [3] and methods based on differential privacy [4] offer less overhead at the cost of leaking some private information. Our approach belongs to the latter category, but offers a more favorable privacy-accuracy trade-off on the LLM fine-tuning tasks that we experiment with. That said, we agree that computational overhead is important and will discuss it more thoroughly in the final version of the paper. >Looking at the code, there are 7 hyperparameters introduced by this method, which may be hard to use in practice. I would suggest the authors fix some hyperparameters that the algorithm is robust to, to reduce the number of tunable hyperparameters. 
We believe there was a misunderstanding about our hyperparameters. We mention 7 hyperparameters only in the readme file from our supplementary code. However, **the actual number of hyperparameters is much smaller** – these 7 parameters include the options for running baseline algorithms. More specifically: * `coefs_method_name` is only needed for switching between the baseline and our method. * `activation_lr_rw`, `shift_lr_rw`, `activation_dc_rw`, `shift_dc_rw` switch between our algorithm and the Distance Correlation (DC) baseline. * The `n_of_loras` **is** related to our algorithm, but we always use $n{=}2$ in all experiments (see line 332), so it is de facto constant. This leaves **only two hyperparameters:** the regularization parameter $\alpha$ described in Appendix A (referenced in line 325), and the magnitude of the noise used in Eq. (5). To further simplify hyperparameter tuning, we will add guidelines for setting these parameters in the final revision. >Also the experiment results on SST2 show a severe leakage around 10% compared to without LoRA (even though this is relatively weaker than other methods). This is a valid observation, as private fine-tuning still remains a challenging task. In our submission, we compare algorithms in terms of the trade-off between privacy and final quality. While fine-tuning without LoRAs does not leak privacy, it significantly compromises the accuracy of the main task. For example, not using LoRAs on DeBERTa loses 24.8% accuracy on average across 3 tasks. >Have the authors considered other PEFT like linear probing and BiTFiT? In principle, our approach can indeed be applied with other PEFT algorithms such as BiTFiT, since we only modify how the model is fine-tuned as a whole, without relying on any specific PEFT method. In our experiments, we chose LoRA as it is the most popular type of adapter used for NLP models. In turn, linear probing is identical to our baseline “Without LoRAs”. 
We thank the reviewer for this reference and will use this name in the updated paper. [1] Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022. [2] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. Advances in Neural Information Processing Systems, 36, 2024. [3] Jiankai Sun, Xin Yang, Yuanshun Yao, and Chong Wang. Label leakage and protection from forward embedding in vertical federated learning. arXiv preprint arXiv:2203.01451, 2022. [4] Xinwei Wan, Jiankai Sun, Shengjie Wang, Lei Chen, Zhenzhe Zheng, Fan Wu, and Guihai Chen. Pslf: Defending against label leakage in split learning. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pages 2492–2501, 2023. [5] Brian Knott, Shobha Venkataraman, Awni Hannun, Shubho Sengupta, Mark Ibrahim, and Laurens van der Maaten. Crypten: Secure multi-party computation meets machine learning. NeurIPS 2021. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful rebuttal. It addresses most of my concerns. I have read it carefully and decided to maintain my score.
Rebuttal 1: Rebuttal: We would like to express our gratitude to all the reviewers for their detailed feedback. We are pleased that the reviewers **1WCr** and **S8Y2** concur with our assessment regarding the critical importance of private API-based fine-tuning of large models in the contemporary landscape. We also appreciate **Fveg**'s and **S8Y2**'s acknowledgment of the paper's clear and coherent presentation. Additionally, we are glad that the reviewers **Fveg**, **T8zi** and **S8Y2** noted our empirical results. The reviewers provided several valuable suggestions for improving the manuscript. Among others: * Reviewer **S8Y2** highlighted PSLF[1] as a relevant baseline. In response, we conducted comprehensive experiments with this framework on the SST2 and QNLI datasets using DeBERTa and Flan-T5 models. Our findings, detailed in Table 1 of the attached PDF, indicate that PSLF[1] was consistently outperformed by Distance Correlation (DC) and P$^3$EFT across all four experimental setups. * Reviewer **1WCr** inquired about the impact of the number of adapters on privacy and accuracy. To address this, we conducted experiments using the DeBERTa model on the SST2 and QNLI datasets with $n=3$ and $n=4$ adapter sets. The results, presented in Table 2 of the PDF, generally demonstrate that the number of adapters has minimal influence on the resulting privacy and accuracy. * Reviewer **T8zi** expressed interest in the efficacy of our setup when utilizing a single set of adapters. We examined this scenario using DeBERTa on the SST2 and QNLI datasets. Despite slightly reduced stability with respect to $\alpha$ (the regularization weight hyperparameter), this setup proved highly competitive, which opens a promising direction for further research. We have incorporated these results alongside previous experiments in Table 2. We thank the reviewers for their insightful feedback and address the remaining questions and comments in our individual responses. 
We have also taken note of your observations and will address all the shortcomings in the final version of the paper. [1] Wan, Xinwei, Jiankai Sun, Shengjie Wang, Lei Chen, Zhenzhe Zheng, Fan Wu, and Guihai Chen. "PSLF: Defending Against Label Leakage in Split Learning." In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pp. 2492-2501. 2023. Pdf: /pdf/5f8965380f6797f6e4f4b07bfc4eebf1251db513.pdf
NeurIPS_2024_submissions_huggingface
2024
Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering for HDR View Synthesis
Accept (poster)
Summary: This paper proposes LE3D, an HDR 3D Gaussian Splatting method with Cone Scatter Initialization, Color MLP, and depth distortion. Specifically, this paper introduces the Cone Scatter Initialization to enrich the estimation of SfM. The Color MLP aims to represent the RAW linear color space. The goal of depth distortion and near-far regularizations is to improve the accuracy of scene structure. Experimental results show that the proposed LE3D achieves promising results. Strengths: 1. The motivation of this paper is good. Nighttime 3DGS and HDR 3DGS are very important. 2. The proposed method includes comprehensive experiments. 3. The experimental settings are easy to follow. 4. The video demo in the supplementary material is good. Weaknesses: 1. The authors mentioned, “the SfM estimations based on nighttime images are often inaccurate.” I agree with this point. However, from my understanding, the initial point clouds can be wrong and can be further optimized by the 3DGS training stage. Therefore, I am wondering if the cone scatter initialization is really important. Could the authors provide ablation studies using the RawNeRF dataset? 2. 3DGS highly relies on the initial point clouds. One of the examples in Fig. 1 (a) shows that the input image can be very dark. I am wondering if the authors can get the initial SfM estimations from such inputs. From my understanding, COLMAP cannot work well on these dark inputs. If so, how do the authors get the initial SfM estimations? 3. The authors mentioned, “the SH does not adequately represent the HDR color information of RAW images due to its limited representation capacity.” Why? Could the authors provide any solid evidence? 4. The depth distortion (Eqn. 8) is quite similar to Eqn. 15 in Mip-NeRF 360. How does the proposed depth distortion differ from the depth distortion of Mip-NeRF 360? Could the authors conduct experiments to compare the two depth distortions? 5. The authors seem to overstate their methods. 
Compared to RawNeRF, LE3D can reduce training time to 1%. However, the fast training speed is not actually due to the proposed components, but rather from 3DGS itself. If we compare it to the original 3DGS, the training speed of LE3D increased from 1.05 to 1.53, which is an improvement of approximately 45.71%. Since this is the first HDR 3DGS paper, I do not want to be too strict. But I suggest that the authors do not overstate the advantages in terms of speed. 6. The quantitative results of the ablation studies should be shown. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the Weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing the motivation, effectiveness, and thoroughness of our experiments on LE3D. We also appreciate your valuable comments! Below, we address the specific points. 1. **Does CSI really matter?** We have conducted ablation studies on CSI in our main paper; please refer to Fig. 4 (b) in our main paper and Fig. 1 in the rebuttal PDF. Compared to LE3D, removing CSI leads to a lack of distant scene details and to a large number of Gaussians splitting at incorrect depth levels to fit the distant scene, thereby resulting in failure of the structural optimization. Additionally, there is a severe decline in visual quality (please zoom in for the best view). This is because the sparse point cloud cannot reconstruct distant points effectively, whereas CSI can effectively enrich the initial point cloud in distant areas, achieving effective distant scene reconstruction. 2. **Initial SfM on Dark Scenes**: We follow the exact settings of RawNeRF to obtain the initial SfM using COLMAP (except that we use the PINHOLE camera model instead of OPENCV). On the RawNeRF dataset, COLMAP works well for calibrating camera poses but not very well for generating point clouds. The lack of point cloud initialization is also one motivation for proposing CSI, which enriches the point cloud and extends its depth range. If a scene in other data is too dark, leading to poor COLMAP calibration, we brighten the scene using the DNG data to obtain corresponding JPG images, and then perform COLMAP calibration. 3. **Why is SH less expressive than an MLP?** As this is a common question, please refer to the third section "What limits the final representation capability of SH on linear color space" in the global rebuttal for details. 4. **Depth Distortion**: Our depth distortion map is an approximate implementation of equation (14) from Mip-NeRF 360, as we mentioned in the main paper L197-L198. 
Both of our regularizations are used to improve structure reconstruction. Please refer to Table 2 in the supplementary for a quantitative comparison of the effects of both $ R_{dist} $ and $ R_{nf} $, and Fig. 6, 7 in the supplementary for qualitative comparisons. 5. **Overstated Performance**: Thanks for pointing out this. We apologize and will adjust our wording accordingly in our next version. 6. **Quantitative Results of Ablation Studies**: Please refer to Table 2 in our supplementary. We hope our response addresses your concerns. You can find a reiteration of our motivation, contribution, and potential application in the first section of the global rebuttal. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I will maintain my score. --- Reply to Comment 1.1.1: Comment: We are sincerely grateful for your comprehensive review and the constructive feedback provided. Your acknowledgment of our paper's strengths and the thoughtful questions raised regarding its weaknesses provide valuable guidance for our revisions. We will address the concerns with additional clarity and precision in our updated version.
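For readers comparing the two regularizers discussed above, a generic NumPy sketch of a Mip-NeRF 360-style distortion loss may help. This illustrates the general form only (made-up variable names; not LE3D's implementation): a pairwise term that pulls the ray's weight mass together along depth, plus a self term that shrinks each weighted interval.

```python
import numpy as np

def distortion_loss(weights, midpoints, intervals):
    """Distortion regularizer in the spirit of Mip-NeRF 360 (illustrative form).

    weights:   per-sample blending weights along a single ray
    midpoints: midpoints of the sample intervals (normalized ray distance)
    intervals: lengths of the sample intervals
    """
    w = np.asarray(weights)
    m = np.asarray(midpoints)
    # pairwise term: penalizes weight spread out along the ray
    pairwise = np.sum(w[:, None] * w[None, :] * np.abs(m[:, None] - m[None, :]))
    # self term: penalizes wide weighted intervals
    self_term = np.sum(w ** 2 * np.asarray(intervals)) / 3.0
    return pairwise + self_term

# A ray with diffuse weights scores higher than one with all mass in one interval.
mid = np.linspace(0.05, 0.95, 10)
iv = np.full(10, 0.1)
spread = np.full(10, 0.1)               # diffuse weights
peaked = np.zeros(10); peaked[4] = 1.0  # concentrated weights
print(distortion_loss(spread, mid, iv) > distortion_loss(peaked, mid, iv))  # True
```

Minimizing this loss therefore encourages each ray's density to collapse to a thin surface, which is the structure-sharpening effect both regularizers aim for.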
Summary: The paper LE3D proposes training a 3DGS representation with raw images, instead of preprocessed LDR images. This allows more accurate scene recovery in low-light environments and unlocks applications such as HDR rendering or refocusing. While there has been prior work on training neural scene representations with raw images (notably RawNeRF), this work specifically proposes to use a 3DGS representation, which allows real-time rendering. To facilitate training a 3DGS representation with raw images, they make several contributions: 1) Cone Scattering Initialization - a method to "densify" the Gaussian initialization to overcome inaccuracies in SFM 2) Replacing the SH in 3DGS with a colour MLP 3) Introducing space carving losses for better geometry recovery (for downstream tasks) Strengths: Originality & Significance: I like that the paper brings several of the contributions in the NeRF literature to 3DGS. Specifically, I like the problem being attempted in the paper: I think HDR rendering and training with raw images is an important problem, and it is timely to attempt it using a 3DGS representation. Losses such as the distortion loss from MipNeRF360 are widely used in the NeRF literature, and I think it's valuable to have an implementation of them in the 3DGS context. Not a significant research contribution, but I also really liked the interactive viewer in the supplement; I think it would be fun to play around with the demo and see the weaknesses of the method. Quality: I think the authors are fairly open about the limitations, for example mentioning that NeRF-based methods obtain superior sRGB metrics. Clarity: The paper is well organized. Weaknesses: Originality: I think although useful, the contribution of the work isn't huge; from Table 1, we can see RawNeRF + GS performs quite well already. 
I also believe the Related Works section is a bit sparse; there has been a lot of work on low-light image enhancement with NeRF, which I think is relevant (even though it's not about HDR rendering), see a couple below: 1) Lighting up NeRF via Unsupervised Decomposition and Enhancement, ICCV 2023 - method for training NeRFs with sRGB images taken in the dark 2) Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption, AAAI 2024 - method for training NeRFs with sRGB images taken in over/under-exposed scenarios Quality: This is my biggest gripe with the work; some of the contributions are included without significant reasoning. Most importantly: 1) The authors introduce the CSI and say it helps due to the misalignments in the SFM estimation. However, I don't think this claim is sufficiently well-proven. Perhaps a denser initialization helps even in cases with perfect cameras? 2) The authors do not justify including the colour MLP, apart from saying it performs better than just using the SH (due to the linear scaling). I think this point is especially important since the MLP appears to be making the method 1.7 times slower in rendering (looking at Table 1). For 1) I am sympathetic to the authors that this might be hard to justify, but I think 2) should be addressed (see also questions). Clarity: There are quite a few typos/grammatical errors, I list just a few below, but I would implore the authors to use a spell check throughout the text: L38: to the 3D world L170: fed L173: set Technical Quality: 2 Clarity: 2 Questions for Authors: 1) Have the authors tried just increasing the rate of the Gaussian splitting instead of CSI? Have they tried anything else apart from CSI for the gaussian density problem? 2) Have the authors tried other strategies apart from a colour MLP? Perhaps the SH are also somehow introducing some bounds/weird gradient updates.
Here's a thought: in NeRF, if you bound your radiance when it shouldn't be bounded, your density would also become heavily affected by the radiance (as seems to be happening in your case without the colour MLP); perhaps something similar is going on here? How differently does the per-channel scaling behave with/without the colour MLP? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing our motivation and the structure of our paper, as well as your valuable comments! Below, we address the specific points. 1. **About the related work**: we will add and discuss them in our next version. We will add more discussion of novel view synthesis techniques based on sRGB images (both with/without multi-exposure), citing the papers you referred to. 2. **Does RawGS (RawNeRF + 3DGS) perform well enough?** We do not believe that RawGS (RawNeRF + 3DGS) performs well enough. The reasons are as follows: 1) Colour reconstruction failure & more floaters remaining: As shown in Fig. 1 (left), Fig. 3, and Fig. 13-15 in our main paper and supplementary, these results indicate that RawGS performs worse than LE3D in terms of colour and floaters. 2) Structure reconstruction failure: As shown in Fig. 5 (c, e) and Fig. 13-15 in our main paper and supplementary, it is evident that RawGS fails to reconstruct the structure (depth map). This is disastrous for downstream tasks such as refocusing, as shown in Fig. 5 (b, d) in our main paper and Fig. 9 in the supplementary. Therefore, we do not believe RawGS performs well enough, which is also the motivation for LE3D: to make 3DGS available for real-time HDR view rendering from noisy RAW images, as well as downstream editing tasks. 3. **Regarding CSI and dense point cloud initialization**: We have previously conducted experiments using dense point clouds (this is also what we tried before coming up with the idea of CSI), which indeed partially solved the problem of failing to reconstruct distant scenes. However, it led to the following issues: 1) Longer time cost: The time required to reconstruct dense point clouds is four to five times that of sparse point clouds (from 6 min 23 sec to 28 min 53 sec on scene 'bikes'), thereby increasing the overall reconstruction time.
2) Redundant gaussians when converged: An excessive number of points leads some gaussians to fit noise, which needlessly increases the count of gaussians. For instance, in the 'bikes' scene, the number of points increased from 4.1e5 to 5.4e5, approximately a 30% increase, resulting in longer rendering time and more storage cost. So a balance between sparse and dense point cloud initialization is important, and this is what motivated our CSI. In fact, it is hard to tell the difference between the visual results of dense initialization and CSI initialization; we can provide evidence of this. 4. **Regarding the speed issue between MLP and SH**: We believe this balance is worthwhile because both structure and colour are important for downstream tasks, and the speed of LE3D remains real-time. More discussion on MLP and SH can be found in the third section "What limits the final representation capability of SH on linear colour space" in the global rebuttal. 5. **Regarding increasing the splitting threshold and the idea before CSI**: We experimented with increasing the splitting threshold as shown in Fig. 1 (c) in the rebuttal PDF, but it did not effectively solve the issue of distant scenes and actually increased reconstruction noise. The reason it cannot effectively address distant scenes is that our dataset is forward-facing, and gaussians tend to move parallel to the camera plane rather than perpendicular to it. Therefore, if there are not enough gaussians for distant scenes during initialization, additional gaussians will not appear spontaneously. We observed this phenomenon in previous experiments with dense point clouds. Dense point clouds can reconstruct points in the distance, but this also impacts training time and the number of gaussians. Our CSI effectively balances the number and distance of points. 6.
**Regarding colour representation besides MLP and the co-optimization of colour and structure**: Currently our choice of MLP proves sufficiently effective with its learning ability, as shown in Fig. 1 (d) and Table 2 in the rebuttal PDF. However, we agree with the reviewer that trying other strategies is interesting future work. In vanilla 3DGS (with SH), there is indeed a phenomenon where colour and structure do not optimize well together. As we mentioned in the ablation study (Fig. 4 (e) in the main paper), gaussians with SH in the early stages lead to colour optimization failure, which in turn causes structure optimization failure before 15,000 iterations (during which gaussians primarily optimize structure through split/clone). This, in part, limits the final colour performance of SH. Please refer to the third section "What limits the final representation capability of SH on linear color space" in the global rebuttal for details. Besides, the behaviour of the per-channel scaling with/without the colour MLP can also be found in Fig. 1 (d) in the rebuttal PDF. We hope our response addresses your concerns. You can find a reiteration of our motivation, contribution, and potential application in the first section of the global rebuttal. --- Rebuttal 2: Comment: I thank the authors very much for their response. **2)** Thank you for stressing the refocusing results, I think these indeed show that RawGS just can't perform some tasks, despite not performing badly on NVS metrics. **3, 5)** I think the authors' response and Fig. 1 in the rebuttal adequately show the importance of CSI. I would implore the authors to add this discussion to the paper/supplement. **4, 6)** I think the authors misunderstood me slightly, I'm not saying the MLP is not necessary, I guess I just wanted to know what other alternatives the authors have tried. I think this is important since, as I mentioned, the MLP brings about a 1.7 times reduction in rendering speed, which is the main point of the paper.
I appreciate the figures the authors have included in the general rebuttal about the statistics of the SH-derived colors. I think they raise the question: what if an activation were used on top of the radiance prediction from the SH? RawNeRF uses an exponential activation to increase the dynamic range. I think this might serve a similar purpose here, increasing the dynamic range and also making the outputs non-negative? Does that make sense? I think I'm replying quite late, and the authors might not have enough time to address this, apologies for this. Although I would love to see this experiment before the end of the discussion phase, I understand this might not be enough time and would then love to see it for the camera ready. --- Rebuttal Comment 2.1: Comment: Thank you so much for acknowledging our response and for the helpful suggestions regarding the SH with exponential activation experiments. We'd like to clarify a few points: 1. **Apologies for the confusion on Question 2**: We're sorry for any misunderstanding earlier. We believe that the MLP greatly enhances performance, and the benefits outweigh the extra computational costs. The improvements in color and structure are crucial for LE3D's flexibility in downstream tasks and for better visual results. Importantly, the MLP version of LE3D still operates in real-time, so we think switching from SH to MLP is a good trade-off. 2. **SH with Exponential Activation Additional Experiment**: We appreciate your insightful recommendation. Early in our project, we did try combining SH with exponential activation after noticing that vanilla SH produced negative values. While this approach did help with some of the issues SH has with HDR scene reconstruction, our experiments showed that it didn’t fully address SH's inherent limitations. For example, in the 'windowlegovary' scenario, the SH+EXP method showed a maximum value above 1e3 on one channel while staying below 10 on others, highlighting SH's expressive limitations.
Additionally, we observed color issues similar to those in the 'yuccatest' scenario, as shown in Fig. 7 of our supplementary material. We’re planning to include these discussions, additional figures, and a more detailed response (regarding CSI and the MLP/SH comparison) in our next version. We really appreciate your feedback and acknowledgment. Please feel free to reach out with any more questions or requests for additional experiments. If our clarifications have addressed your concerns, we’d be very grateful if you could consider adjusting your score to reflect the improvements and the efforts we've put into refining our paper.
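The SH-plus-exponential-activation idea discussed in this exchange can be sketched in a few lines. This is an illustrative reconstruction, not the authors' or RawNeRF's actual implementation; the helper name `sh_colour` and the random coefficient values are made up for the example. It only shows the mechanism at issue: a raw SH evaluation is an unconstrained linear combination (so it can go negative), while an exponential activation maps it to strictly positive, potentially high-dynamic-range values.

```python
import numpy as np

def sh_colour(coeffs, dirs):
    """Evaluate a degree-1 real spherical harmonic colour for unit view directions.

    coeffs: (4, 3) SH coefficients per colour channel (one degree-0 + three degree-1)
    dirs:   (N, 3) unit view directions
    Returns the raw (unactivated) SH output, which may be negative.
    """
    c0 = 0.28209479177387814  # |Y_0^0| constant
    c1 = 0.4886025119029199   # |Y_1^m| constant
    basis = np.stack([
        np.full(len(dirs), c0),  # Y_0^0 (view-independent term)
        -c1 * dirs[:, 1],        # Y_1^{-1}
        c1 * dirs[:, 2],         # Y_1^{0}
        -c1 * dirs[:, 0],        # Y_1^{1}
    ], axis=1)                   # (N, 4)
    return basis @ coeffs        # (N, 3)

rng = np.random.default_rng(0)
dirs = rng.normal(size=(8, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
coeffs = rng.normal(size=(4, 3))  # made-up coefficients for illustration

raw = sh_colour(coeffs, dirs)  # linear SH output: unbounded, can be negative
hdr = np.exp(raw)              # exponential activation: strictly positive
```

The activation guarantees non-negativity but, as the rebuttal reports, does not by itself remove the representational limits of a low-degree SH basis.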
Summary: LE3D is a novel method for real-time novel view synthesis, HDR rendering, refocusing, and tone-mapping changes from RAW images, especially for nighttime scenes. It addresses the limitations of previous volumetric rendering methods by introducing Cone Scatter Initialization to improve SfM-estimated point clouds, replacing SH with a Color MLP to represent the RAW linear color space, and introducing depth distortion and near-far regularizations to enhance scene structure accuracy. LE3D significantly reduces training time and increases rendering speed compared to previous methods. The model is tested on the benchmark dataset of the existing RawNeRF paper. Strengths: 1) The paper is well written and has a clear structure that aligns with the contributions, which makes it easy to read and understand. 2) The methods section is technically sound, and details are described. Weaknesses: 1) Instead of using the Spherical Harmonics, the authors propose a view-dependence MLP and explain the change by stating that the MLP can better represent the linear RGB space; however, it is unclear to me why the SH are less expressive than a tiny MLP in the linear RGB space. Can the authors provide more insights on this? 2) Insufficient quantitative investigation of the new view-dependence MLP. I'd expect more evaluation of this part, e.g. varying the size of the MLP and different parameters of the SH model. Moreover, it would be good to see the effect of switching to MLPs on the render time. 3) While the cone scatter initialization leads to better rendering, it would be good to discuss how the additional sampling points influence the total number of gaussians. This number becomes relevant for the fps measurements and might reduce speed. Technical Quality: 3 Clarity: 3 Questions for Authors: Please discuss the points mentioned in the weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Sufficiently discussed.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition of our writing and your valuable comments! Below, we address the specific points. 1. **Request for more insights on MLP and SH**: We have analyzed the instability factors of SH during training and provided more statistical data to demonstrate its limited representation capability under high contrast conditions. Please refer to the third section "What limits the final representation capability of SH on linear color space" in the global rebuttal for details. 2. **Request for more ablation studies on MLP and SH**: We have conducted experiments on the size of the MLP and the degree of the SH. Please see Table 2 in the rebuttal PDF. The experiments demonstrate that modifying the parameters of SH does not address its inherent issues, and it underperforms MLP across all metrics. Moreover, adjusting the parameters of the MLP has minimal impact, given the small disparity in the metrics. Consequently, transitioning to an MLP is entirely justified. 3. **Detailed results on gaussian numbers for w/wo CSI**: In some scenes (especially bikes and windowlegovary), the sparse point cloud fails to reconstruct distant points accurately. During training, gaussians tend not to move perpendicular to the camera plane but rather parallel to it. This results in many foreground gaussians attempting to fit the background from different perspectives, as shown in Fig. 4 (b) in our main paper, leading to more foreground split/clone operations, thereby increasing the number of gaussians and leading to worse results in structural reconstruction. CSI is used to add sparse points for distant scenes, and due to the optimization capabilities of 3DGS, in all scenarios CSI not only did not increase the number of points, but actually reduced it. Besides, CSI is particularly effective in the outdoor scenes (-24.86% for bikes and -13.10% for windowlegovary). Details can be found in Table 1 in the rebuttal PDF. We hope our response addresses your concerns.
Here, I would like to reiterate the potential impact of LE3D: real-time rendering for HDR view synthesis is important because it can support more downstream editing tasks, allowing novel view synthesis technology to be applied to VR/AR or post-production in film and television (reframing and postprocessing). As demonstrated in our demo video, our current progress already supports various subsequent processing techniques in near real-time. These techniques bring LE3D closer to practical applications. More details about our motivation can be found in the first section of the global rebuttal. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you, I appreciate your efforts in answering my concerns. The explanation regarding the SH and MLP representation makes sense to me and supports the respective claim in the paper. Further, I want to thank the authors for the extensive ablation study on the appearance representation and the CSI. After reading all reviews and the rebuttal, I increase my rating to weak accept and encourage the authors to add the additional ablation studies to the paper/supplementary. Best Reviewer PtE5 --- Reply to Comment 1.1.1: Comment: Thanks for your thoughtful feedback and for adjusting the rating in light of our responses. We are encouraged by your support and will certainly incorporate the additional ablation studies into our paper or supplementary material as suggested. Thank you once again for your constructive criticism and valuable feedback. --- Rebuttal 2: Comment: Dear Reviewer PtE5, We hope this message finds you well. As we approach the culmination of the discussion period, we would like to extend our heartfelt thanks for your insightful feedback and the time you have invested in evaluating our work. We are keen to ensure that all your queries and concerns have been satisfactorily addressed.
Should there be any remaining points that you believe require further clarification or discussion, we kindly urge you to share them with us at your earliest convenience. It is our sincere hope that our responses thus far have successfully clarified the issues you raised. If you find that your concerns have been resolved, we would be immensely grateful if you would consider adjusting your score to reflect the improvements and the efforts we have made to refine our paper. Thank you once again for your time and consideration. Warm regards, The Authors
Summary: The authors aim to leverage 3D Gaussian Splatting with a few additions and changes in order to perform HDR view synthesis. The authors propose that with the addition of cone-scattering to the Structure from Motion initialization, replacement of Spherical Harmonics with a simple MLP for color representation, and extra distance regularizations, an HDR scene may be very quickly optimized to accurately render novel views in real-time. The approach is evaluated on the data used by RawNeRF (CVPR 2022). Strengths: The paper proposes a reasonable set of additions in order to adapt 3DGS for the purpose of HDR scene reconstruction and view synthesis. Though the submission emerges alongside other comparably-aimed papers, the approach proposed is distinct and its authors provide sufficient evidence of the efficacy of their method. - Originality It is probably not a very distant leap to apply 3DGS to avenues of work done on NeRF, though this work does go further to involve a few novel additions to the method being adapted, in order to address shortcomings that would be present otherwise. Other works with the same aim appear to be emerging at this time, though the approach taken by the authors, affecting the gaussian initialization and performing distance regularizations, appears distinct. - Quality With consideration to the supplementary materials, the submission adequately covers most points of concern. Claims made are supported by the provided experimental results. Included ablation study results show qualitative and quantitative contribution for each improvement proposed by the authors. Weaknesses: - Clarity For the most part, the writing is clear and straightforward. The context for previous efforts improved upon and problems being solved is well provided. Relevant equations are present and adequately described. Implementation and environment details are well expanded upon in the supplementary materials. 
However, there are a small number of typos remaining in the submission and supplement. Please make sure to correct these. - Significance The paper is one of several that aim to introduce HDR Gaussian Splatting, though with regards to other work released within this month and the last, no code has been released as of yet, so no direct comparisons between results may be made. However, the stated improvements relative to these other works appear to come out on top. Technical Quality: 3 Clarity: 3 Questions for Authors: It is not explicitly written, but would it be appropriate to briefly and directly address the tradeoff between FPS/training time and accuracy in the comparison between the proposed method and the compared 3DGS methods? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The supplementary material sufficiently addresses the limitations and potential negative societal impact of the submitted work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing our structural regularization and the thoroughness of our experiments! Below, we address the specific points. 1. **About the typos**: thanks for pointing them out! We will definitely fix them in the next version of our paper. 2. **About the difference between LE3D and concurrent papers**: we have discussed the differences between LE3D and other methods in the global rebuttal; please refer to the details there. The main differences between LE3D and other methods lie in two aspects: a) LE3D can perform blind denoising on the data without the need for noise model calibration, which broadens its range of applications. b) LE3D places more emphasis on structural information, making it more suitable for downstream editing tasks. 3. **The tradeoff in FPS/Training time**: While our claim may seem somewhat strong, we confidently advocate choosing LE3D, with the caveat that this comparison is made relative to LDR-GS, HDR-GS, and RawGS as discussed in our main paper. The main reasons are as follows: a) Visual quality: As shown in the main paper and supplementary figures, LE3D exhibits less noise compared to other methods (e.g., floaters in Fig. 1, 3) and better color, as shown in Fig. 13. b) Structural information: LE3D captures structural details better than other methods and much better than RawGS (Fig. 13, 14, 15), which benefits downstream tasks, as demonstrated in the defocus tasks shown in Fig. 5 and 9. We hope our response addresses your concerns. You can find a reiteration of our motivation, contribution, and potential application in the first section of the global rebuttal. --- Rebuttal Comment 1.1: Comment: Dear Reviewer Me2T, Thank you for your recognition and for offering the first 'weak accept' for our LE3D! We will certainly include a discussion on the differences between LE3D and other concurrent works in the next version.
As the discussion period draws to a close, we would like to ensure that all your queries and concerns have been fully addressed. If there are any points that require further clarification or additional information, we are more than happy to provide it. Should you find that our responses have satisfactorily resolved your reservations, we would be deeply grateful if you could consider adjusting your score to reflect the improvements. Best regards, The Authors
Rebuttal 1: Rebuttal: First and foremost, we would like to express our gratitude to all the reviewers and the Area Chair for their diligent review. We sincerely appreciate your recognition of our writing, your appreciation of the effects shown in our demo video, and your identification of typos and weaknesses in our paper. We look forward to improving LE3D with all of your help and making it a better paper, project, or maybe a product. However, given that the confidence scores from our reviewers are predominantly 3, we wish to reiterate to the reviewers and the ACs the motivation, contribution, and potential impact of our LE3D. **Reiterate the motivation, contribution, and potential impact.** We are aiming at real-time HDR view rendering and downstream editing tasks. With the recent advancements in 3DGS-related work, we attempt to leverage its capabilities to achieve real-time rendering and fast training. However, directly applying 3DGS techniques to noisy raw images presents numerous challenges, such as poor SfM estimation, the limited representation capacity of SH, and inaccurate structure reconstruction. The primary contribution of LE3D is addressing these issues, enabling 3DGS to excel on noisy RAW images. It enables many post-processing applications, including HDR rendering, refocusing, tone-mapping changes, and so on. Our potential impact lies in the applications of AR/VR technology and related post-production techniques in film and television (reframing and postprocessing). Additionally, it can bring traditional computational photography techniques into the realm of 3D reconstruction. As demonstrated in our demo video, our current progress already supports various subsequent processing techniques in near real-time: HDR tone mapping, refocusing, and color temperature tuning. Therefore, we believe that the potential impact of LE3D is substantial.
**The Difference Between LE3D and Other Concurrent Methods.** As discussed in the introduction, current HDR reconstruction methods mainly include two approaches: 1) reconstruction from multi-exposure 8-bit RGB images, and 2) direct reconstruction from noisy under/multi-exposure data. The first approach has high data requirements, necessitating changes in exposure during capture, while the second approach needs to overcome the impact of noise. The most recent concurrent 3DGS methods on similar issues include HDR-GS[1] (RGB), HDRSplat[2], and Raw3DGS[3] (RAW). Among them, HDR-GS[1] does not start with RAW data and operates in a different setting from our method. HDRSplat[2] and Raw3DGS[3] also have substantial differences in their algorithmic frameworks from ours: 1. The Necessity of a Denoising Network: Both HDRSplat[2] and Raw3DGS[3] adopt multi-stage approaches, requiring noise model calibration, denoising, and reconstruction after data collection. This necessitates extra data, training, and tedious noise model calibration when transferring their methods to other devices. In contrast, though we do not have a specific module for denoising, our LE3D algorithm is blind to noise, making it potentially adaptable to any device. 2. Better structure for downstream tasks: Unlike other methods, our focus is not only on the reconstruction of visual effects (denoising) but also on the reconstruction of scene structure, due to our CSI initialization and regularizations. This makes our LE3D more suitable for downstream tasks like refocusing. [1] Cai, Yuanhao, et al. "HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting." arXiv preprint (2024). [2] Singh, Shreyas, Aryan Garg, and Kaushik Mitra. "HDRSplat: Gaussian Splatting for High Dynamic Range 3D Scene Reconstruction from Raw Images." arXiv preprint (2024). [3] Li, Zhihao, et al. "From Chaos to Clarity: 3DGS in the Dark." arXiv preprint (2024).
Next, we will elaborate on a common question raised by several reviewers: **What limits the final representation capability of SH on linear color space.** The answer is the extremely high dynamic range in linear color space. 1. We analyze the final convergence statistics for windowlegovary (multi-exposure training with a very high dynamic range). We calculate the mean and variance of each gaussian color from each training view, as shown in Fig. 1 (d) in the rebuttal PDF. It can be observed that: a) The color MLP can learn a very high dynamic range in linear color space (maximum value over 200), while the numerical range using SH collapses completely (contains negative values) and fails to learn the extremely high dynamic range in linear space. b) Additionally, the final color using SH is significantly affected by the viewing direction (as reflected in the extremely large variance), which is also a manifestation of its instability and ultimately reduces its color representation ability in linear color space. 2. As for the training process, the SH cannot optimize the color well in the early stages of training, as shown in Fig. 4 (e) in the main paper. This also leads to poor structural optimization before 15,000 iterations (the first 15,000 iterations are when 3DGS performs densifying/cloning), as illustrated in Fig. 4 (c) in the main paper. Poor structure causes color optimization to fail after 15,000 iterations (at this stage, 3DGS mainly optimizes color and does not perform densifying/cloning), further affecting the visual effects, as shown in Fig. 1 (left) in the main paper and Fig. 7 in the supplementary. Both of the above situations affect the representation capability of SH at final convergence. Details can be found in Table 2 in the rebuttal PDF, which shows MLP always outperforms SH by a large margin. Pdf: /pdf/2363c63aca5e9e8896e002e3760ba45ef95e0d3d.pdf
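The per-gaussian statistic described in this global rebuttal (mean and variance of each gaussian's colour across training views) can be sketched as follows. The colour tensor below is synthetic stand-in data; in the paper it would come from evaluating each gaussian's view-dependent colour (SH or colour MLP) at every training view.

```python
import numpy as np

# Synthetic stand-in for per-view, per-gaussian linear colours.
# Shape convention (assumed here): (num_views, num_gaussians, 3).
rng = np.random.default_rng(1)
num_views, num_gaussians = 25, 1000
colours = rng.lognormal(mean=0.0, sigma=2.0,
                        size=(num_views, num_gaussians, 3))  # HDR-like values

# Reduce over the view axis: mean shows the dynamic range captured per
# gaussian; variance quantifies view dependence (a very large variance is
# the instability signature the rebuttal attributes to SH).
per_gaussian_mean = colours.mean(axis=0)  # (num_gaussians, 3)
per_gaussian_var = colours.var(axis=0)    # (num_gaussians, 3)
```

With real renders, plotting the distribution of `per_gaussian_mean` and `per_gaussian_var` for SH vs. the colour MLP would reproduce the kind of comparison shown in Fig. 1 (d) of the rebuttal PDF.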
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Team-Fictitious Play for Reaching Team-Nash Equilibrium in Multi-team Games
Accept (poster)
Summary: This paper introduces a new variant of fictitious play where agents respond to the last actions of team members. The authors need to improve their academic writing considerably. The expression between paragraphs and sentences lacks logic and consistency. Moreover, the paper does not list the challenges and gaps in current research or explain why this research can solve these issues. Furthermore, from my point of view, I did not see particular novelty or practical value in this research. Strengths: This paper introduces a new variant of fictitious play where agents respond to the last actions of team members. Weaknesses: The authors need to improve their academic writing considerably. The expression between paragraphs and sentences lacks logic and consistency. Moreover, the paper does not list the challenges and gaps in current research or explain why this research can solve these issues. Furthermore, from my point of view, I did not see particular novelty or practical value in this research. Technical Quality: 1 Clarity: 1 Questions for Authors: I suggest the authors reorganize the paper and improve their academic writing. Confidence: 4 Soundness: 1 Presentation: 1 Contribution: 1 Limitations: I suggest the authors reorganize the paper and improve their academic writing. Flag For Ethics Review: ['Ethics review needed: Data quality and representativeness'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We disagree with the reviewer’s claim that expressions between paragraphs and sentences lack logic and consistency except possibly for very few instances. We would appreciate it if the reviewer could explicitly refer to any specific examples so that we can address them. The first two paragraphs of the introduction highlight the challenges and the gaps in the literature and how this paper can solve these issues. **What are the challenges and gaps in the current research?** * There are **no results** addressing whether Team-Nash equilibrium (TNE) can predict the outcome of interactions among multiple teams of autonomous decision-makers in non-cooperative environments despite the recent interest in multi-team games and the efficient computation of TNE.  * Joint team responses can require the correlation of the responses among team members. Therefore, the widely studied fictitious play dynamics do not necessarily reach TNE even in two-team zero-sum games. There are also **no results** achieving such correlation without communication or ex-ante coordination for multi-team games. **Why can this research address this issue?** * We **present a slight adjustment** of the fictitious play dynamics in the direction of log-linear learning, another widely studied dynamics. This adjustment ensures that the dynamics involve simple behavioral rules consistent with the autonomous agents’ adaptation according to their self-interests, and team members can still learn to coordinate in the best team response against other teams without communication or ex-ante coordination. * We **prove the almost-sure convergence** to TNE in a broad class of multi-team games, including the two-team zero-sum and multi-agent zero-sum polymatrix games as special cases. 
**What are the practical novelty or value of this research?** * Our results **strengthen the predictive power of TNE** for multi-team games, **justify the recent interest** in efficient computation of TNE for multi-team games, and **provide a theoretical foundation** to design/regulate multi-team games under TNE.  * We have exemplified the practical impact of the results on an **airport security scenario** in the global response. The example shows that designing airport security for the worst-case of the team-minimax equilibrium against autonomous attackers that do not coordinate is not too conservative since the attackers can learn to coordinate in the worst response by following simple behavioral rules, as in the Team-FP dynamics.  * Due to the **scalability** of the Team-FP dynamics, teams of agents can also follow the Team-FP dynamics to learn the coordination in the best team response against other teams without the burden of communication and ex-ante coordination, especially in **large-scale systems with networked interconnections**. The global response includes such a large-scale numerical example. Regarding the ethics flag, we have **only used randomly generated games** in our paper without external data and provided our codes as supplementary material. There cannot be ethical issues regarding data representativeness and quality.
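For readers unfamiliar with the dynamics family this rebuttal refers to, here is a textbook fictitious-play sketch on matching pennies. This is the classical two-player variant only, not the paper's Team-FP (which additionally uses inertia and responses to teammates' last actions); it simply illustrates the empirical-frequency best-response loop that such dynamics build on.

```python
import numpy as np

# Matching pennies: row player's payoff matrix; column player receives -A.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

# Action counts (initialised to 1 to avoid degenerate beliefs / ties).
counts_row = np.ones(2)
counts_col = np.ones(2)

for _ in range(20000):
    # Each player forms a belief = empirical frequency of the opponent's past play.
    belief_col = counts_col / counts_col.sum()  # row's belief about column
    belief_row = counts_row / counts_row.sum()  # column's belief about row
    # Best responses to the beliefs (ties broken by argmax toward index 0).
    a_row = int(np.argmax(A @ belief_col))
    a_col = int(np.argmax(-(belief_row @ A)))
    counts_row[a_row] += 1
    counts_col[a_col] += 1

freq_row = counts_row / counts_row.sum()
freq_col = counts_col / counts_col.sum()
# By Robinson's classical result, the empirical frequencies in a two-player
# zero-sum game converge to a Nash equilibrium -- here the mixed (1/2, 1/2).
```

In contrast, the paper's point is that in multi-team games this kind of uncoordinated belief-tracking alone need not reach a Team-Nash equilibrium, which is why Team-FP adds its log-linear-learning-style adjustment.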
Summary: This paper addresses the complex problem of multi-team games by introducing a new variant of fictitious play called Team-FP. The method aims to enable teams of self-interested agents to reach Team-Nash Equilibrium (TNE) in multi-team games, with a particular focus on zero-sum potential team games (ZSPTGs). The authors present a rigorous convergence analysis and extend the Team-FP dynamics to multi-team Markov games. Extensive simulations are provided, comparing Team-FP with other algorithms to demonstrate its effectiveness and practical applicability. Strengths: The paper introduces Team-FP, a novel variant of fictitious play specifically designed for multi-team games. The approach incorporates inertia in action updates and agents’ responses to the last actions of team members, which enhances team coordination. This creative extension of classical fictitious play is valuable in advancing the understanding of team dynamics in multi-agent settings. The paper is well-structured and organized, making it easy to follow the progression from the introduction of the problem to the presentation of the proposed methods. Weaknesses: Despite the novelty of the Team-FP approach, the modifications to classical fictitious play are not significantly groundbreaking. The methods, while creative, represent incremental improvements rather than major innovations in the field. I am not familiar with the convergence proof; thus, I cannot and do not have the time to verify the proof. The authors put a lot of effort into the theoretical part. The experimental setting and results are simple. Even for a theoretical paper, it is better to support the convergence proof with experimental results. Thus, I think the paper is below the acceptance bar of NeurIPS. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How can we obtain Eq. (4)? 2. Can you provide experimental evidence to support your convergence proof? 
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough and constructive comments. Firstly, we want to highlight that the equation (4) is inherited from polymatrix games. For example, the equation always holds for two-team games. As discussed in the introduction, the polymatrix structure (or network separable interactions) is essential for generalizing two-team zero-sum games to more than two teams. **[Novelty of the Team-FP]** We see this *slight* adjustment of the fictitious play dynamics in the direction of log-linear learning as a *strength*, justifying it as a simple behavioral rule consistent with the agents’ autonomous adaptations based on their self-interests. Notably, both dynamics have been studied extensively to model the interactions of human-like self-interested learners in the learning-in-games and behavioral-game-theory literature. Therefore, by showing the convergence of Team-FP dynamics to Team-NE, we provide **behavioral support for the predictive power of Team-NE** in multi-team environments (with uncoordinated team members), as exemplified in our **airport security scenario** in the global response. These convergence results also **justify the design of sophisticated equilibrium computation methods** to predict the outcome of such multi-team interactions. Furthermore, the simplicity of the dynamics ensures its **scalability for large-scale games** with networked interconnections, as discussed in the global response and its supplementary PDF. **[Experimental Support for Convergence]** Our extensive numerical simulations show convergence to Team-NE approximately as the Team-Nash Gap decays to near zero. The inherent exploration in softmax response induces the approximation error, and such occasional deviations from the greedy best response play an important role in escaping suboptimal team responses. --- Rebuttal Comment 1.1: Title: thanks and raise score Comment: Thank you for your response to my comments. 
I have read your rebuttal and am happy to raise the score. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We thank the reviewer for reviewing the rebuttal and raising the score.
Summary: This paper introduces Team-Fictitious Play (Team-FP) dynamics as a novel approach for teams of self-interested agents to converge to Team-Nash equilibrium in multi-team games. The study focuses on games where multiple teams interact strategically, aiming to maximize their collective utilities. For this purpose, this paper (1) introduces Team-FP as a method for teams to converge to Team-Nash equilibrium in multi-team games; (2) extends the convergence analysis of Team-FP dynamics to multi-team Markov games; (3) demonstrates the effectiveness of Team-FP dynamics through theoretical analysis and empirical evaluations in various multi-team game scenarios. Strengths: 1. The introduction of Team-FP represents a novel approach to address the challenge of teams reaching equilibrium in multi-team games. 2. The detailed numerical analysis and simulations conducted to evaluate the behavior of Team-FP dynamics in various multi-team game settings reflect the thoroughness and quality of the method. Weaknesses: 1. The notations are confusing and it's hard to keep up. For example, 'agent index' and 'team index' both use lowercase letters. 2. Limited discussion on computational complexity. The paper could provide more insights into the computational complexity of implementing Team-FP dynamics in large-scale multi-team games. Discussing the scalability of the approach, potential bottlenecks, and computational efficiency considerations would be beneficial for understanding the practical feasibility of deploying Team-FP in complex settings. Technical Quality: 3 Clarity: 2 Questions for Authors: The authors provide some experiments, but it would be helpful to see more extensive experiments, including a larger-scale multi-team game to demonstrate the effectiveness of the algorithm in a more complex setting. 
Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough reading of our paper and constructive comments. We appreciate the feedback regarding the notation. We have reserved uppercase letters to denote sets; e.g., $A^1$ denotes agent 1’s action set. For the explicit exposition, we underlined the parameters related to teams, e.g., $\underline{a}^1$ (and $\underline{\pi}^1$) denotes the joint team action (and team strategy) of team 1 while $a^1$ (and $\pi^1$) denotes agent 1’s local action (and local strategy).  Since the Team-FP dynamics have linear updates, they are **scalable, especially for sparse networked interconnections**, as discussed in Remark 3.1 of the paper. For example, if at most 2-hop neighbors of the agents affect their reward (as in Example 2.1 of the paper), the computational complexity grows linearly with the number of teams and the maximum number of agents in a team. Furthermore, the complexity grows exponentially with the number of 2-hop neighbors of agents, where the base of the exponent is the number of local actions.  To illustrate the scalability of the Team-FP dynamics, we have conducted experiments on a **large-scale game** as requested by the reviewer and described its details and the numerical results in the global response and its supplementary PDF. We also highlight that the global response includes the **practical application of an airport security scenario** modeled as a multi-team game. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications and extra experimental results. I'd like to raise my score. However, it would be better if the authors could simplify their notations. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We thank the reviewer for reviewing the rebuttal and raising the score. As recommended, we will simplify the notation, e.g., use bold letters for team-related parameters, once the paper gets updated.
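The scaling described in the rebuttal above (linear in the number of teams and the maximum team size, exponential in the number of 2-hop neighbors with the number of local actions as the base) can be turned into a back-of-the-envelope estimate. The helper below is a hypothetical illustration of that statement, not code from the paper.

```python
def team_fp_update_cost(n_teams, max_agents_per_team,
                        n_local_actions, n_two_hop_neighbors):
    """Rough per-round operation count implied by the discussion above:
    linear in (teams x agents per team), exponential in the size of the
    2-hop neighborhood, with the number of local actions as the base."""
    return n_teams * max_agents_per_team * n_local_actions ** n_two_hop_neighbors

# Sparse interconnections (2 two-hop neighbors per agent) vs. a dense network
# for the 3-team, 9-agents-per-team, binary-action example in the global
# response: 3 * 9 * 2**2 = 108 vs. 3 * 9 * 2**26 operations per round.
sparse_cost = team_fp_update_cost(3, 9, 2, 2)
dense_cost = team_fp_update_cost(3, 9, 2, 26)
```

The gap between the two estimates is what makes sparse networked interconnections essential for scalability in the large-scale example.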
Summary: In this work, the authors introduce a novel variant of fictitious play, referred to as **Team-Fictitious Play (Team-FP)**, aimed at helping self-interested agents within teams reach **Team Nash Equilibrium (TNE)** in multi-team games. The paper focuses on zero-sum potential team games (ZSPTGs), where teams interact pairwise, but the payoffs to team members are not necessarily identical. The main contributions include the introduction of inertia in action updates and responses to the last actions of team members, which are crucial for team coordination. The authors provide theoretical convergence guarantees and validate the efficacy of the approach through extensive simulations. Strengths: 1. The introduction of Team-FP fills a gap in the multi-team game theory literature, particularly in the context of zero-sum potential team games. 2. The paper offers rigorous theoretical analysis and practical insights, including convergence proofs and error bounds. 3. Extensive simulations compare Team-FP with other algorithms, demonstrating its effectiveness and exploring the impact of various parameters on convergence speed. 4. The grammar and expression are accurate and professional. Weaknesses: - Adding more background or appendix sections on Nash equilibrium and related solution algorithms would greatly help the reader's understanding. - The theoretical analysis section is quite technical; including more intuitive explanations and illustrations could be beneficial. For instance, adding a diagram in Section 3 to visually depict the workflow of Team-FP might help. - A more detailed analysis of the parameters used in Team-FP and their sensitivity to performance could provide deeper insights. - While the paper focuses on zero-sum potential team games, discussing how Team-FP could be adapted or extended to other types of multi-team games would be valuable. - The numerical experiments section could benefit from more qualitative analysis. 
- The discussion on practical applications is not sufficiently thorough; adding more experiments and analysis in specific application scenarios would be beneficial. - Section 5 could include experimental comparisons with other multi-team learning algorithms. Technical Quality: 3 Clarity: 3 Questions for Authors: All my questions are written inside the weakness part. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper explicitly discusses its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s thorough review and constructive comments. We can gladly include further clarifications on the points raised, which will improve our paper’s accessibility. In the following, we address the reviewer’s questions: **[How can we visualize the learning dynamic and the solution approach?]** We have included Figure 1 to visually depict the Team-FP dynamics and the fundamental idea in our technical analysis in the PDF attached to the global response. **[How can we extend Team-FP to other classes of games?]** Similar to the FP dynamics, Team-FP dynamics are **game agnostic**. In other words, agents do not know whether the other teams have aligned or misaligned objectives. Furthermore, agents do not know whether other team members have identical or different yet aligned objectives. As discussed in the conclusion section, various essential classes of games have the fictitious play property. The generalization of our approach to such multi-team games is an interesting future research direction. In the illustrative examples section, we have discussed several other types of multi-team scenarios beyond zero-sum potential team games, such as **$2\times N$ general-sum game** and **potential-of-potentials game**. As observed in these examples, we expect the almost-sure convergence of Team-FP for the games where FP converges if teams were single agents since in the Team-FP dynamics, team members can learn to coordinate in the (evolving) best team response and can act as if they are a single decision-maker. **[How sensitive is the performance to the parameters used?]** In the illustrative examples section, we have numerically examined and discussed the parameters $\tau$ and $\delta$. Our results indicate that decreasing $\tau$ (i.e., reducing exploration) leads to a lower Team-Nash Gap at the steady state. However, exploration plays an essential role in escaping suboptimal team responses. 
Increasing $\delta$ (or reducing friction) can accelerate convergence. However, friction in the action updates is critical for reaching the optimal team response. **[How does Team-FP perform in practical applications?]** The reviewer raises a significant concern. In the rebuttal period, we have implemented a real-life application scenario to address this question. To this end, we generalized the widely studied **airport security scenario** to multi-team games. Attackers are autonomous decision-makers with different objectives; though their objectives can differ to a certain extent, they are aligned in maximizing the defender’s cost due to their adversarial nature. This makes our multi-team formulation an ideal model for such cases. We have described the game and presented the numerical results in the global response and its supplementary PDF. Our analysis shows that **defending our systems according to the worst-case as if the attackers are coordinated centrally is important since the attackers can learn such coordination based on simple behavioral rules consistent with their self-interests, as in the Team-FP dynamics.** **[How does Team-FP perform compared to other multi-team learning dynamics?]** The existing results on multi-team games are primarily computational. On the other hand, in this paper, we present simple behavioral rules (slight adjustments of the widely studied fictitious play dynamics) that can reach Team-NE to **strengthen the predictive power of Team-NE and justify the algorithms developed to compute Team-NE efficiently** for multi-team games, as exemplified in our airport security scenario. Furthermore, their simplicity makes the dynamics scalable for **large-scale problems**, as the global response discusses. To our knowledge, Team-FP is the first multi-team learning dynamic that does not rely on any (ex-ante) communication among team members. 
Notably, the Fictitious Team Play (FTP) algorithm, presented by [Farina et al., "*Ex ante* coordination and collusion in zero-sum multi-player extensive-form games," In NeurIPS, 2018], is related, and we have compared FTP and Team-FP dynamics in the global response and its supplementary PDF.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable time and constructive comments. Based on Reviewer Jh6E’s request, we have illustrated the Team-FP dynamics and the fundamental idea of the proof in Figure 1 of the attached PDF. Furthermore, based on the other comments, we have conducted several new experiments despite the limited time and computational resources. We have also provided the results of these experiments in the PDF. In the following, we describe these experiments in more detail. **[A Practical Application]** We can model an **airport security scenario** as a two-team game between defender and attacker teams. Note that airport security has been studied from a game-theoretical lens extensively, and the findings have been **deployed in real life**, e.g., see [An et al., “PROTECT - A deployed game theoretic system for strategic security allocation for the United States Coast Guard,” AI Magazine, 2013]. In our example, we consider a single security chief in the defender team and three different intruders in the attacker team, as illustrated on the left-hand side of Figure 2 of the PDF. The chief can defend a gate at the expense of some cost. Intruders autonomously decide whether to attack a specific gate or remain idle. The intruders receive positive (or negative) payoffs if they attack an undefended (or defended) gate. Correspondingly, the chief gets a positive (or negative) payoff if intruders attack defended (or undefended) gates. The security chief has $2^6=64$ actions, while each intruder has $6+1=7$ actions. We have conducted $50$ independent trials and presented the evolution of the average of the Team-Nash Gap along with the standard deviations as shaded areas on the right-hand side of Figure 2. From a higher level, this example shows that **team-minimax equilibrium can predict the outcome of airport security games for the likely scenario of different uncoordinated attackers. 
It also justifies the algorithms developed to compute team-minimax equilibrium efficiently.**  **[A Large-Scale Example]** We have simulated a three-team game with nine agents per team, i.e., $27$ agents in total. Note that this configuration has a huge joint action space of $2^{27}$ action profiles. We have conducted ten independent trials. We have plotted the evolution of the Team-Nash Gap in Figure 3 of the PDF. Despite the scale of the problem, our experiments show that the empirical averages of the team actions converge to the Team-Nash equilibrium **at a comparable rate when the networked interconnections are sparse**, as depicted in the top right of Figure 3.  **[A Comparison with Other Multi-team Learning Dynamics]** The existing learning dynamics for multi-team games are very limited as the literature mainly focuses on efficient computation of equilibrium rather than identifying whether equilibrium can **emerge** as an outcome of self-interested adaptations. **One exception** is the fictitious team play in [Farina et al., "*Ex ante* coordination and collusion in zero-sum multi-player extensive-form games," In NeurIPS, 2018]. In our setup, their fictitious team play dynamics (developed for extensive-form games) become the classical fictitious play dynamics where the entire team acts as a single decision-maker due to the ex-ante coordination among team members. Our paper examined this scenario in Figure 2(a) as a full coordination setup. In Figure 4 of the PDF, we have replotted this scenario to highlight the comparison.  Although the fictitious team play reaches equilibrium faster due to the ex-ante coordination, such coordination brings in additional burdens beyond time. In contrast, Team-FP dynamics are scalable without such coordination burden, as shown in the large-scale example above. Figure 2-(a) in the paper also shows the trade-off between coordination burden and learning speed by including the half-coordination scenario. 
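The airport security game described in the global response can be encoded compactly. The sketch below is a hypothetical reconstruction with made-up payoff magnitudes (the defend cost and the unit hit/miss rewards are illustrative), intended only to show the action-space counts ($2^6 = 64$ chief actions, $6 + 1 = 7$ intruder actions) and the adversarial payoff structure between the teams; the defense cost makes it only approximately zero-sum.

```python
N_GATES, N_INTRUDERS = 6, 3
IDLE = N_GATES  # intruder actions: 0..5 attack that gate, 6 = remain idle

# Chief actions: every subset of gates to defend, encoded as a bitmask.
chief_actions = list(range(2 ** N_GATES))      # 2**6 = 64 actions
intruder_actions = list(range(N_GATES + 1))    # 6 + 1 = 7 actions each

def payoffs(defended_mask, attacks, defend_cost=0.2):
    """Return (chief payoff, list of intruder payoffs).

    Intruders get +1 for attacking an undefended gate and -1 for a defended
    one; the chief's payoff mirrors the intruders' attack outcomes and pays
    defend_cost per defended gate. Magnitudes are illustrative stand-ins.
    """
    chief = -defend_cost * bin(defended_mask).count("1")
    intruders = []
    for g in attacks:
        if g == IDLE:
            intruders.append(0.0)
        elif defended_mask >> g & 1:       # attacked a defended gate
            intruders.append(-1.0); chief += 1.0
        else:                              # attacked an undefended gate
            intruders.append(1.0); chief -= 1.0
    return chief, intruders
```

For example, with only gate 0 defended and intruders attacking gate 0, idling, and attacking gate 3, the intruders receive $(-1, 0, +1)$ while the chief nets the defense cost plus one hit minus one miss.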
Pdf: /pdf/83557426449f017fd132fb00be4c153bc172cf37.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Robust Neural Contextual Bandit against Adversarial Corruptions
Accept (poster)
Summary: This paper studies the problem of contextual bandits with neural function approximation faced with adversarial corruptions. It proposes an algorithm named R-NeuralUCB, which can improve the robustness of neural contextual bandit training. It provides regret analysis and conducts experiments to show the advantage of the new algorithm. Strengths: 1. This paper proposes a new algorithm for neural contextual bandits, which is based on a new technique that customizes individual sets of network parameters for each candidate arm. It can improve the robustness under adversarial corruption. 2. Theoretical analysis has been provided, with a robust regret bound dependent on the effective dimensions of the neural network. 3. Experiments are conducted on publicly available real-world data sets to show the better performance of the proposed algorithm. Weaknesses: 1. The computational cost is huge. Compared with NeuralUCB-WGD, the main difference lies in the separate neural networks for each candidate arm, which greatly increases the computation cost, as it needs to compute the gradient descent for every arm in each round separately. 2. Theorem 4.12 in [1] shows that if, with no corruption, $Regret(T) \le R_T$, then when $C > \Omega(R_T/d)$, the algorithm will suffer $\Omega(T)$ regret. This seems contradictory to Theorem 5.6 for unknown $C$. 3. Theorem 5.6 relies on tuning the parameter $\alpha$ to achieve $\min w_{i,t}^\tau = \kappa^2, \forall t$; it is unclear how this can be achieved. [1] He et al. Nearly optimal algorithms for linear contextual bandits with adversarial corruptions. NeurIPS 2022. Technical Quality: 2 Clarity: 2 Questions for Authors: See Weaknesses Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors have addressed the limitations and potential societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for your valuable questions and comments. Given the page limit of 6000 characters, we will try our best to provide our detailed response in the form of Q\&A. Please also see our manuscript for cited papers. *Thank you!* **Q1: Overall discussion on computational efficiency?** Please kindly refer to our "Global Response" for complementary discussions. To reduce the computational cost for obtaining arm-specific network parameters (line 9, Algorithm 1) in practice, for our experiments, we apply the warm-start GD strategy to strike a good balance between performances and computational costs. Pseudo-code and detailed descriptions are in Appendix Subsection B.6 due to page limit. In round $t\in \\{3, \dots, T\\}$, for each candidate arm $x_{i, t}, i\in [K]$, we can acquire its arm-specific parameters $\theta_{i, t-1}$ by fine-tuning existing trained parameters $\theta_{t-2}$ with a small number of training samples (received arm-reward pairs), instead of obtaining $\theta_{i, t-1}$ by training from scratch, i.e., starting from randomly initialized $\theta_{0}$ and training with a large number of samples. The intuition is inspired by meta-learning works [Finn et al.], where we consider each candidate arm as a "task". In this case, for round $t$, we can start from previously trained model parameters $\theta_{t-2}$, and adapt $\theta_{t-2}$ to each candidate arm (task), with a small number of training samples to obtain arm(task)-specific parameters. This helps R-NeuralUCB (1) reduce computational costs of calculating arm weights for GD, and keep inference time relatively stable across horizon $T$, since we apply a fixed number of samples for adaptation; (2) reduce GD iterations needed, as starting from $\theta_{0}$ will require more GD iterations for model convergence. Meanwhile, it is also a common practice for neural bandit works to adopt warm-start training in practice. 
Here, on one hand, neural bandit works generally formulate their GD training process by starting from randomly initialized $\theta_{0}$ (e.g, [76,74,7,8,60]), in compliance with theoretical analysis requirements. On the other hand, however, for their experiments and source code, it is common to train model parameters incrementally, by updating trained parameters from previous rounds, since it is more computationally efficient for real applications. Please see our "Global Response" PDF file for added experiments: (1) With experiments on warm-start vs. restarting from $\theta_{0}$ (Figure 1) given the same number of training (adaptation) samples, we see the warm-start process can improve performances. (2) With experiments based on increased number of arms $K$ ($K=50, 100$, Figure 2), we see that the warm-start strategy can lead to good performances while helping the inference time stay relatively stable. *[Finn et al.] Model-agnostic meta-learning for fast adaptation of deep networks.* **Q2: Relationship with the lower bound in (He et al., 2022)?** First, we would like to first recall that Theorem 4.12 of [35] relies on Assumption 2.1 of [35], which formulates a learning problem under linear contextual bandit settings. Then, in our Theorem 5.6, when $C=0$, we have the corruption-free regret of $\tilde{\mathcal{O}}(\tilde{d}\sqrt{T} + S\sqrt{\tilde{d}T})$. In this case, after setting $C = \Omega(R_{T}/d)$, we will end up with a regret bound of $\tilde{\mathcal{O}}\left( (\tilde{d}^{2}\sqrt{T} + S\tilde{d}^{3/2}\sqrt{T}) \cdot C \beta^{-1}d^{-1} \right)$. Here, we note that with the proof flow of Theorem 4.12 in [35] and learning problem specified by Assumption 2.1 of [35], our effective dimension term $\tilde{d}^{2}$ can possibly depend on horizon $T$, and grow along with $T$ [22]. 
As a result, although the regret bound only contains $\sqrt{T}$ terms, the overall order of the regret bound can be equal to or greater than $\mathcal{O}(T)$ due to the effective dimension $\tilde{d}^{2}$, as discussed in [22]. This is distinct from linear bandit works, where regret bounds generally only consist of horizon $T$ and other $T$-independent terms, such as the context dimension $d$ and the fixed $T$-independent linear parameter norm $\|\theta^{*}\|_{2}$. Therefore, our Theorem 5.6 does not contradict Theorem 4.12 of [35]. Meanwhile, we also note that for stochastic neural bandit works, the effective dimension term $\tilde{d}$ (or information gain for kernelized bandits [65,12]) is generally inevitable [76,74,7,43], since we need to bridge over-parameterized networks with the NTK-based regression model. Similarly, kernelized bandits [12,65,24] will also generally involve similar terms as the cost of modeling the unknown non-linear reward mapping function. We appreciate the reviewer's question. For readers' reference, we have added the above discussion to Appendix Subsection B.7 as a supplement to our current discussion of the lower bound. **Q3: How to tune parameter $\alpha$ to satisfy the requirements of $\kappa$?** We first recall that in round $t\in [T]$, we have arm weights $w\_{i, t}^{(\tau)}, \tau\in [t-1], i\in [K]$. Based on line 184, we can also denote $w\_{i, t}^{(\tau)} = \min\\{ 1,\alpha\cdot \textsf{frac}\_{\tau}(x\_{i, t}; \mathcal{X}\_{t}, \bar{\Sigma}\_{t-1}) \\}$. Here, instead of deeming $\alpha$ as a fixed value across horizon $T$, we can consider $\alpha$ to be varying across different rounds, denoted by $\alpha\_{t}, t\in [T]$. With $\textsf{frac}\_{t}^{\textsf{min}} = \min\_{i\in [K], \tau\in [t-1]}[ \textsf{frac}\_{\tau}(x\_{i, t}; \mathcal{X}\_{t}, \bar{\Sigma}\_{t-1}) ]$, we can set each $\alpha\_{t} = \kappa^{2} / \textsf{frac}\_{t}^{\textsf{min}}, \kappa\in (0, 1)$. 
In this way, we can consequently have $\min\\{w\_{i, t}^{(\tau)}\\}\_{i\in [K], \tau\in [t-1]} = \kappa^{2}, \forall t\in [T]$. To improve the paper presentation, we have also added above explanations to the manuscript for readers' reference. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed responses, which are really helpful. I will keep my scores. --- Reply to Comment 1.1.1: Title: Thank you for the review and feedback! Comment: Dear Reviewer 1FL3, Thank you again for your insightful comments and questions, and we will definitely integrate our discussions into the manuscript as readers' reference. Meanwhile, please also kindly let us know if you have any further concerns or questions. We will be more than glad to provide additional explanations in a timely manner. Thank you again! Best regards, Authors
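As a small numeric illustration of the round-wise choice $\alpha_t = \kappa^2 / \textsf{frac}_t^{\textsf{min}}$ described above: with clipped weights $w = \min\{1, \alpha \cdot \textsf{frac}\}$, this choice makes the smallest weight exactly $\kappa^2$ while all weights stay capped at $1$. The frac scores below are made up for illustration and do not come from the paper.

```python
import numpy as np

def clipped_weights(frac, alpha):
    # Elementwise w = min(1, alpha * frac), mirroring the clipping rule above.
    return np.minimum(1.0, alpha * frac)

# Hypothetical frac scores over arm/round pairs in some round t.
frac = np.array([0.3, 0.05, 0.9, 0.12])
kappa = 0.5
alpha_t = kappa ** 2 / frac.min()       # round-wise alpha, as in the rebuttal
w = clipped_weights(frac, alpha_t)
```

Here $\textsf{frac}^{\textsf{min}} = 0.05$ gives $\alpha_t = 0.25 / 0.05 = 5$, and the resulting weights are $(1, 0.25, 1, 0.6)$, so the minimum weight is exactly $\kappa^2 = 0.25$.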
Summary: This paper proposes a novel neural contextual bandit algorithm, called R-NeuralUCB, to improve robustness against adversarial reward corruptions. The authors provide regret analysis for R-NeuralUCB under over-parameterized neural network settings, without the commonly adopted arm separateness assumption. The authors also empirically compare R-NeuralUCB with baseline algorithms on three real data sets, under different adversarial corruption scenarios. Strengths: The paper is well-written. To the best of my knowledge, this work proposes the first theoretical result for neural bandits with adversarial corruptions. The theoretical analysis seems solid and the experiments show that R-NeuralUCB outperforms baseline algorithms in the corruption setting. Weaknesses: My main concern is about the computational costs. Notice that R-NeuralUCB need to maintain a neural network for each arm. Thus I think R-NeuralUCB could be very expensive when $K$ is very large. I will be happy if the authors could provide some discussions on the computational costs and show some empirical results for larger $K$. Technical Quality: 3 Clarity: 3 Questions for Authors: In the experiments, NeuralUCB and NeuralTS have similar performance as R-NeuralUCB. Is it possible that these two methods also have sublinear regret under adversarial attacks? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for your valuable questions and comments. Given the page limit of 6000 characters, we will try our best to provide our detailed response in the form of Q\&A. Please also see our manuscript for cited papers. *Thank you!* **Q1: Overall discussion on computational efficiency? Performance with large $K$?** Please see our "Global Response" PDF file for added experiments. (1) With experiments on warm-start vs. restarting from $\theta_{0}$ given the same number of training (adaptation) samples (Figure 1), we see the warm-start process can improve performances. (2) With experiments based on increased number of arms $K$ ($K=50, 100$, Figure 2), we see that the warm-start strategy can lead to good performances while helping the inference time stay relatively stable. In practice, for our experiments, we can adopt the warm-start GD process (details and pseudo-code are in Appendix Subsection B.6) to reduce the computational cost, in terms of deriving the arm-specific network parameters (line 9, Algorithm 1). In round $t\in \\{3, \dots, T\\}$, for each candidate arm $x_{i, t}, i\in [K]$, we can acquire its arm-specific parameters $\theta_{i, t-1}$ by fine-tuning existing trained parameters $\theta_{t-2}$ with a small number of training samples (received arm-reward pairs), instead of obtaining $\theta_{i, t-1}$ by training from scratch, i.e., starting from randomly initialized $\theta_{0}$ with a large number of samples. Similar ideas are also applied by other bandit-related works (e.g., [9]). The intuition is inspired by meta-learning [Finn et al.] where we consider each candidate arm as a "task". In this case, for round $t$, we can start from previously trained model parameters $\theta_{t-2}$, and adapt $\theta_{t-2}$ to each candidate arm (task) with a small number of training samples to obtain arm(task)-specific parameters. 
This also reduces our computational costs of calculating arm weights for GD, and keeps the inference time relatively stable across rounds. Meanwhile, it is also a common practice for neural bandit works to adopt warm-start training in practice. On one hand, neural bandit works generally formulate their GD training process by starting from randomly initialized $\theta_{0}$, in compliance with theoretical analysis requirements. On the other hand, however, for their experiments and source code (e.g., [76,74,7,8,60]), it is a common technique to train model parameters incrementally, by updating trained parameters from previous rounds, since it is more practical and computationally efficient for real applications. Therefore, we adopt the following approaches to strike a good balance between model performance and computational costs: - (1) Inspired by meta-learning ideas [Finn et al.], we utilize warm-start GD by adapting previously trained network parameters $\theta_{t-2}$ for each candidate arm with a small number of training samples, instead of training from $\theta_{0}$ with a large number of samples. Details are in Appendix Subsection B.6. - (2) Based on our formulation of the warm-start GD process (Algorithm 2, Subsection B.6), we will sample a fixed number of mini-batch training samples (i.e., received arm-reward pairs) for each candidate arm to calculate arm weights and perform GD. This is inspired by the idea of mini-batch warm-start GD training [9]. In this case, a fixed number of training samples can help the round-wise inference time stay relatively stable, without growing drastically along with $T$. *[Finn et al.] 
Model-agnostic meta-learning for fast adaptation of deep networks.* **Q2: Is it possible that Neural-UCB and Neural-TS also have sub-linear regret under adversarial attacks?** We agree with the reviewer that it is interesting and important to investigate the regret bounds of vanilla neural bandit algorithms, in order to better understand the benefit of the proposed methods. In particular, to offer some insights on the regret bound of Neural-UCB, one possible route is to follow an approach analogous to the regret analysis of NeuralUCB-WGD. On the other hand, the analysis of Thompson Sampling based approaches can be significantly different from that of our UCB-based methods, as we consider UCB-based analysis in this paper. Therefore, we consider the regret analysis of Neural-TS under corruptions as part of future work. Here, the key idea is to quantify the impact of adversarial corruptions on the confidence ellipsoid around the trained parameters. By following the derivations in Lemma F.1, denoting the corruption-free confidence radius in round $t$ as $\tilde{\gamma}\_{t-1}$, we can obtain the corrupted confidence ellipsoid for Neural-UCB as $\mathcal{C}\_{t-1} = \\{ \theta: \\|\theta - \theta\_{t-1}\\|\_{\Gamma\_{t-1}} \leq \gamma\_{t-1} / \sqrt{m} \\}$, where $\gamma\_{t-1} = \tilde{\gamma}\_{t-1} + \mathcal{O}(CL\lambda^{-1/2})$. This result is derived by applying $w\_{\tau}=1, \tau\in [t-1]$ and the fact that $\sum\_{\tau\in [t]}|c\_{\tau}| \leq C$, as well as Lemma G.2 and the initialization of the gradient covariance matrix $\Gamma$. As a result, following the proof flow of Lemma 5.3 in [76], we can obtain the regret upper bound $\tilde{\mathcal{O}}(\tilde{d}\sqrt{T} + \sqrt{S\tilde{d}T} + CL\sqrt{\tilde{d}T/\lambda})$, which introduces an additional $\tilde{\mathcal{O}}(\sqrt{T})$ factor to the corruption-dependent term. 
Note that although it is possible for Neural-UCB to have a sub-linear cumulative regret when $C$ is a small constant, our R-NeuralUCB and NeuralUCB-WGD can remove the direct dependency on the horizon term $T$, and manage to achieve non-trivial $\tilde{\mathcal{O}}(C\tilde{d}\beta^{-1})$ and $\tilde{\mathcal{O}}(C\tilde{d}^{3/2} + C\tilde{d}\sqrt{\lambda}S)$ corruption-dependent terms, respectively. For readers' reference, we have also added the above discussions to the manuscript Appendix. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I will keep my score. I hope the authors will include a discussion of computational efficiency in the final version. --- Rebuttal 2: Title: Thank you for the review! Comment: Dear Reviewer Fhvw, Thank you again for your insightful review and comments, and we will definitely include detailed discussions related to computational efficiency in our final manuscript. Thanks again. Best regards, Authors
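To make the warm-start idea discussed in Q1 concrete, here is a minimal, self-contained sketch (our own illustration, not the paper's implementation): a linear model stands in for the neural network $f(\cdot;\theta)$, a greedy rule stands in for the UCB score, the arm-specific weighting of Eq. 5 is omitted, and all hyper-parameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, n_adapt_steps, batch_size, lr = 8, 5, 3, 16, 0.01

def predict(theta, x):
    return float(theta @ x)              # linear stand-in for the DNN f(x; theta)

def adapt(theta_prev, history):
    """Warm-start GD: a few squared-loss steps on a fixed-size minibatch,
    instead of retraining from a randomly initialized theta_0."""
    theta = theta_prev.copy()
    if not history:
        return theta
    idx = rng.choice(len(history), size=min(batch_size, len(history)))
    for _ in range(n_adapt_steps):
        for i in idx:
            x, r = history[i]
            theta -= lr * (theta @ x - r) * x   # gradient of 0.5 * (f - r)^2
    return theta

theta_star = rng.normal(size=d)          # unknown reward parameter (simulation only)
theta_prev = np.zeros(d)                 # plays the role of theta_{t-2}
history = []                             # received arm-reward pairs
for t in range(30):
    arms = rng.normal(size=(K, d))       # candidate contexts x_{1,t}, ..., x_{K,t}
    # one warm-started parameter vector per candidate arm ("task")
    thetas = [adapt(theta_prev, history) for _ in range(K)]
    scores = [predict(th, x) for th, x in zip(thetas, arms)]
    i = int(np.argmax(scores))           # greedy stand-in for the UCB rule
    reward = float(arms[i] @ theta_star) + 0.1 * rng.normal()
    history.append((arms[i], reward))
    theta_prev = thetas[i]               # warm-start the next round
```

Because each round adapts from the previous parameters with a fixed-size minibatch, the per-round cost stays roughly constant in $t$, which is the point made in the rebuttal.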
Summary: This paper presents R-NeuralUCB, a neural UCB algorithm designed for robustness under adversarial reward corruptions in stochastic $K$-armed contextual bandits. Based on NeuralUCB [1], before the arm pulling, R-NeuralUCB additionally runs a context-aware gradient descent for each arm using a dedicated objective function. The first term of this function is the arm-weighted uncertainty information from the cumulative observed training samples. The second term is inspired by existing works on enhancing model robustness with a regularization technique between the current and initial weights. Contributions include: - A novel R-NeuralUCB algorithm, where each arm weight is formulated from the gradient-norm UCBs, w.r.t. context $x$ and weights $\theta$, of the training samples and the candidate arm. Intuitively, this means that when the uncertainty w.r.t. the candidate arm is high, this weight will be low, encouraging the model to focus on the regularizer to avoid over-fitting. Conversely, if the uncertainty w.r.t. the candidate arm is low, the weight will be high, pushing the model to focus on the observed training samples. - A theoretical upper bound for the proposed algorithm, where the bound for R-NeuralUCB is $\tilde{\mathcal{O}}(\tilde{d}\sqrt{T}+C\beta^{-1}\tilde{d})$, where $\tilde{d}$ is the effective dimension of the NTK matrix, $T$ is the time horizon, $C$ is the adversarial corruption level, and $\beta$ is the data-dependent gradient deviation term. Additionally, the proof of this bound does not require the NTK matrix to be positive-definite. - Experiments show the proposed algorithm has a lower cumulative regret than Neural-UCB and other baselines across different datasets and reward corruption types. Strengths: - This paper has clear mathematical notation, is very well-written, and makes the important aspects of the algorithm clear to understand. 
- I like the motivation to improve the robustness of neural-UCB-based methods under reward corruption because of their real-world applications. - The proposed algorithm also makes sense to me; the arm weighting is based on the uncertainty of the upper confidence bounds, reflecting the reward estimation uncertainty of the model. Therefore, we can use these weights to control the fitting of the cumulative observed training data and prevent impacts caused by adversarial corruption. - The theoretical contribution of this paper is good, yielding a solid R-NeuralUCB algorithm. I appreciate the theory of this paper for two main reasons. Firstly, in the scope of neural-based contextual bandits, I think this is the first work to provide a regret upper bound involving the reward corruption level. Secondly, while neural-based UCB often assumes the NTK matrix in Def. 5.1 is positive-definite, the theoretical analysis of this paper does not require this assumption. - The authors additionally provide a base algorithm, NeuralUCB-WGD, which uses weighted GD with Neural-UCB. Theoretical analysis and experimental results of NeuralUCB-WGD are also provided sufficiently. - Experiments are also evaluated across different settings, including datasets, reward corruption types, and running seeds. Weaknesses: - My biggest concern about R-NeuralUCB is its computational inefficiency, which is a big limitation for real-world use of neural-based contextual bandits. Specifically, while NeuralUCB has been criticized as inefficient because its UCB exploration is performed over the entire DNN parameter space and its gradient [2], R-NeuralUCB is even less efficient than NeuralUCB for the following reasons: - Firstly, R-NeuralUCB requires different neural network model weights for different arms, so R-NeuralUCB is very expensive when $K$ is high. - Secondly, for each time $t \in [T]$, R-NeuralUCB requires additionally optimizing the Context-aware GD objective function in Eq. 5 $K$ times. 
- Thirdly, this Context-aware GD objective function requires computing a weight for each cumulative data point, and the numerator of this weight requires finding the min across $\mathcal{X}_t$, meaning that when $t$ and $K$ increase, the computational cost also increases. - The proposed algorithm requires defining some non-trivial hyper-parameters such as $\alpha$ in Eq. 4 and $\lambda$ in Eq. 5. - Similar to other neural-network-based contextual UCB papers, the proofs must assume that the activation of the network structure is ReLU, that the neural network model is over-parameterized, and that the weight matrices at the middle layers $l$ all have the same size $m \times m$. As also mentioned by the authors, the theory lacks a lower bound for the regret of R-NeuralUCB. - Some experiment settings are lacking, e.g., comparison with other baselines [2, 3, 4], synthetic settings (i.e., where the true reward function is known) and other real-world datasets [1], and a longer time horizon $T$. In some experiments (e.g., the Amazon dataset), R-NeuralUCB does not significantly outperform other baselines. - Miscellaneous: In Figure 1 on the MovieLens dataset (right), the cumulative regrets do not look significantly lower than the others. I would recommend the authors zoom in on it by limiting the y-axis value to 400, so readers can see the regret improvement more easily. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Can you draw two figures: a first bar chart comparing model size (i.e., number of neural network parameters), and a second line graph comparing inference speed (i.e., latency) at each time $t$? 2. While NeuralUCB [1] sets $T=15,000$ on MNIST, why do the authors only use $T=10,000$? Is this for computational reasons? Also, do you have any comments on improving the computational efficiency of your algorithms? 3. The paragraph from L-207 to L-212 does not make sense to me. 
Specifically, why does replacing the initial weight $\theta_0$ with the warm-start GD reduce the computational cost? Do you mean the computational cost regarding the update step in L-15 in Algorithm 1? Does this replacement affect the loss function in Eq. 5? Do you have any empirical results in this regard (i.e., cumulative regret results with and without the replacement)? 4. Can you make a comparison in corruption-free experiments, i.e., set $C=0$? I think this will test whether your Context-aware GD in Eq. (5) hurts NeuralUCB or not. Also, in your experiments, at each time $t$, how do you decide whether an arm is attacked or not? What do you think about context corruption, e.g., using MNIST-C? 5. Have you considered the stochasticity of $C$ in your regret upper bound? Empirically, instead of fixing $C$ across the time horizon $T$, $C$ can be stochastic; we can also test this empirically (e.g., by setting $C$ to be monotonically increasing, decreasing, or randomly distributed with some mean and variance). 6. How do you make the neural network model in the experiments over-parameterized? Since the true reward function is unknown, how do you select the sub-Gaussian value $\nu$ in your algorithm? Since $\beta$ in Theorem 5.6 depends on the selection of $\alpha$ in Eq. 4, does this mean that we can fine-tune the regret in the experiment by searching over $\alpha$? References: [1] Zhou et al., Neural Contextual Bandits with UCB-based Exploration, ICML, 2020. [2] Xu et al., Neural Contextual Bandits with Deep Representation and Shallow Exploration, ICLR, 2022. [3] Bogunovic et al., Corruption-Tolerant Gaussian Process Bandit Optimization, AISTATS, 2020. [4] Zhang et al., Contextual Gaussian Process Bandits with Neural Networks, NeurIPS, 2023. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Please see my comments on the Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for your valuable questions and comments. Given the page limit of 6000 characters, we will try our best to provide our detailed response in the form of Q\&A. Please also see our manuscript for cited papers. *Thank you!* **Q1: Discussion on computational efficiency? Why use warm-start GD?** *Due to page limit, please kindly refer to our "Global Response" for complementary discussions.* First, we would like to mention that the warm-start GD process (detailed descriptions and pseudo-code are in Appendix Subsection B.6) here is to reduce the computational cost for calculating arm-specific network parameters (line 9, Algorithm 1) in practice. In round $t$, for each candidate arm $x_{i, t}, i\in [K]$, we can acquire its arm-specific parameters $\theta_{i, t-1}$ by fine-tuning existing trained parameters $\theta_{t-2}$ with a small number of training samples (received arm-reward pairs), instead of obtaining $\theta_{i, t-1}$ by training from scratch, i.e., starting from randomly initialized $\theta_{0}$ with a large number of samples. The intuition is analogous to meta-learning works [Finn et al.], where we can consider each candidate arm as a "task". In this case, for round $t$, we can start from previously trained model parameters $\theta_{t-2}$, and adapt $\theta_{t-2}$ to each candidate arm (task) with a small number of training samples to obtain arm(task)-specific parameters. This helps R-NeuralUCB (1) reduce computational costs of calculating arm weights for GD, and keep inference time relatively stable across horizon $T$, since we are applying a fixed number of samples for adaptation; (2) reduce GD iterations needed, as starting from $\theta_{0}$ requires more GD iterations for model convergence. In the "Global Response" PDF file: (1) Experiments on warm-start vs. restarting from $\theta_{0}$ (Figure 1) *given the same number of training (adaptation) samples*. 
We see that the warm-start process can improve performance. (2) Experiments based on an increased number of arms $K$ ($K=50, 100$, Figure 2). We see that the warm-start strategy can lead to good performance while helping the inference time stay relatively stable. *[Finn et al.] Model-agnostic meta-learning for fast adaptation of deep networks.* **Q2: Additional experiments: inference speed with different model sizes? $T=15000$? Corruption-free experiments? Stochastic magnitude of corruption $C$?** Due to the page limit, please kindly see our "Global Response" and the attached PDF file for these experiments. - *Experiment 2* (Figure 1): Experiments with $T=15000$. - *Experiment 4* (Figure 3): Model size comparison, and a line graph for inference speed. - *Experiment 5* (Figure 4): Corruption-free experiments. - *Experiment 6* (Figure 4): Stochastic magnitude of corruption experiment (increasing corruption probability). **Q3: Neural network over-parameterization?** Analogous to existing works (e.g., [76,74,7,8]), we utilize a two-layer fully-connected (FC) neural network ($L=2$) with hidden dimension $m=200$ for experiments, as in Appendix Subsection A.1. In terms of over-parameterization, we admit that for basically all neural bandit works with experiments (e.g., [74,43,21,6,7,8]), there exists a gap between experiments and theoretical analysis. On one hand, with more layers $L$ and a larger hidden dimension $m$, neural networks become increasingly difficult to train, more time-consuming at inference, and more demanding in computational resources. In this case, to make neural bandits practical for real applications, neural bandit works generally use an ordinary-size neural network for experiments. It has also been shown that with ordinary-size neural networks, neural bandit algorithms can already achieve significant performance gains over linear and kernel methods [76,74,7,8,60]. 
On the other hand, from the theoretical perspective, neural networks need to be over-parameterized with $m\geq \mathcal{O}(\mathrm{Poly}(T))$, so that they can approximate an arbitrary reward mapping function $h(\cdot)$. Meanwhile, with over-parameterization, the difference between NTK-based regression models and neural networks will be sufficiently small for regret analysis, which makes over-parameterization necessary in neural bandit papers. Therefore, we adopt a two-layer FC network for experiments, while performing analysis under over-parameterization settings, as in most existing neural bandit works (e.g., [76,74,7,8,60]). We have also included the above discussions in the manuscript for readers' reference. **Q4: Can we fine-tune $\alpha$ to control the $\beta$ value in the regret bound?** Based on line 184 in the manuscript, for each arm $x\_{i, t}$, we can represent its arm weight as $w\_{i, t}^{(\tau)} = \min\\{ 1,\alpha\cdot \textsf{frac}\_{\tau}(x\_{i, t}; \mathcal{X}\_{t}, \bar{\Sigma}\_{t-1}) \\}, \tau\in [t-1]$, while the minimum fraction value $\beta$ is defined as $\beta = \min\_{ t\in [T], \tau\in [t-1]} [ \min\\{ \textsf{frac}\_{\tau}(x\_{t}; \mathcal{X}\_{t}, \bar{\Sigma}\_{t-1}), \textsf{frac}\_{\tau}(\tilde{x}\_{t}; \mathcal{X}\_{t}, \bar{\Sigma}\_{t-1}) \\} ]$ (line 301). In this case, tuning the parameter $\alpha$ will not directly alter the $\beta$ value, since $\beta$ only explicitly depends on the fraction term $\textsf{frac}(\cdot)$, which is determined by the candidate arms. On the other hand, different $\alpha$ values can correspond to distinct $\kappa^{2}$ values in Theorem 5.6, which is the round-wise minimum arm weight. This will consequently affect the regret bound, as in line 307 of Theorem 5.6. **Q5: Comments on Figure 1?** We sincerely appreciate your comments on improving the paper presentation, and we have integrated these comments into the manuscript. **Q6: Questions about implementation?** - "How arms are corrupted?": Please see "Global Response" Q3. 
- "Context corruption?": Please see "Global Response" Q4. - "How to choose $\nu$?": Please see "Global Response" Q5. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed rebuttal. I appreciate your explanation and additional results. I keep my original rating for this paper. I hope the authors can add a detailed discussion regarding computational efficiency in the final version. Additionally, I hope the authors can add more detail to the gap between choosing the exploration rate in the theorem and the practical algorithm in Q5. I understand this is a gap in the literature on neural bandits in general, but I believe it is worth discussing to help the community understand the gap between theory and experiments in this direction. Good luck! --- Rebuttal 2: Title: Thank you for the feedback! Comment: Dear Reviewer MEPN, We would like to sincerely thank you again for your valuable and insightful comments on improving our paper. We will definitely integrate these detailed discussions regarding computational efficiency, parameter selection, and theoretical analysis gap to our final manuscript. Thank you again for your invaluable review and feedback. Best regards, Authors
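As a companion to Q4 above, the clipped weight rule $w = \min\\{1, \alpha\cdot \textsf{frac}(\cdot)\\}$ can be sketched numerically. Since the exact $\textsf{frac}(\cdot)$ is defined in the paper, the ratio below (minimum candidate-arm uncertainty over the training sample's uncertainty, with a linear-UCB-style uncertainty in place of the gradient-norm term) is only an assumed stand-in for illustration:

```python
import numpy as np

def uncertainty(x, Sigma_inv):
    # linear-UCB-style uncertainty sqrt(x^T Sigma^{-1} x); the paper uses a
    # gradient-norm analogue, so this is only a stand-in
    return float(np.sqrt(x @ Sigma_inv @ x))

def arm_weight(x_tau, candidate_arms, Sigma_inv, alpha=1.0):
    """Clipped weight w = min(1, alpha * frac) for past training sample x_tau.

    Assumed frac: smallest candidate-arm uncertainty (the min across X_t
    sits in the numerator) divided by the training sample's uncertainty."""
    num = min(uncertainty(x, Sigma_inv) for x in candidate_arms)
    den = uncertainty(x_tau, Sigma_inv)
    return min(1.0, alpha * num / den)

rng = np.random.default_rng(1)
d, K = 4, 6
A = rng.normal(size=(d, d))
Sigma_inv = np.linalg.inv(np.eye(d) + A @ A.T)   # positive-definite stand-in
arms = rng.normal(size=(K, d))                   # candidate set X_t
train = rng.normal(size=(10, d))                 # past training contexts
weights = [arm_weight(x_tau, arms, Sigma_inv) for x_tau in train]
```

By construction every weight lies in $[0, 1]$, and a larger $\alpha$ only pushes weights toward the clip at $1$; this matches the rebuttal's point that $\alpha$ does not directly alter $\beta$, which depends only on the $\textsf{frac}(\cdot)$ values themselves.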
Summary: The paper studied neural contextual bandits under adversarial reward corruptions. The adversary is allowed to perturb the reward after observing the action selected by the bandit player, but is subject to the constraint that the total reward corruption must be bounded by some budget $C$. Within this attack framework, the paper proposed a robust neural contextual bandit algorithm called R-NeuralUCB that formulates a novel context-aware gradient descent process, taking the uncertainty information of both candidate arms and training samples into consideration. When the neural network is over-parametrized, the paper provided a theoretical study of the regret using the NTK technique. The authors proved a sublinear regret bound that scales linearly as the total corruption level $C$ grows. Empirical experiments demonstrated the effectiveness of the proposed robust bandit algorithm. Strengths: The paper studied a popular topic of robust bandits under corruption. Furthermore, different from existing works, this is the first paper that investigates the robustness of "neural" contextual bandits as far as I know, which significantly pushes the frontier of this area. The authors provided a solid theoretical analysis for the proposed R-NeuralUCB algorithm using the NTK technique, which is a great use case of NTK in the bandit domain. The results are interesting and appealing. The authors further demonstrated the effectiveness and efficiency of the R-NeuralUCB algorithm on real-world datasets. The results are convincing and validated the theoretical findings discovered in the paper. Weaknesses: The idea to make the UCB algorithm robust is not novel. Prior works have applied a similar idea that performs more exploration on arms with lower confidence. This work mostly relies on the same idea to achieve robustness. Also, the final regret bound is not surprising to me. It is common that the regret scales linearly as the total corruption level $C$ grows. 
That said, I was wondering if the authors have thought about providing a lower bound on how the regret scales as C grows? Is the upper bound tight? Technical Quality: 4 Clarity: 4 Questions for Authors: I was wondering if the authors have thought about providing a lower bound on how the regret scales as C grows. Is the upper bound tight? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for your valuable questions and comments. We will try our best to provide our detailed response in the form of Q\&A. Please also see our manuscript for cited papers. *Thank you!* **Q1: The regret lower bound in terms of corruption level $C$ under neural bandit settings?** We appreciate the reviewer's question on the regret lower bound. Recall that in our Conclusion section as well as Appendix Subsection B.7, we mention that one limitation of this paper is the lack of a theoretical regret lower bound, and we consider this task a promising future direction of our work. This is because deriving a lower bound under neural bandit settings with adversarial corruption is significantly non-trivial; proving such results can constitute a separate line of research in its own right. Here, non-linear bandit settings tend to pose extra challenges compared with linear bandit settings. For instance, for kernelized bandit works, the lower bounds depend on kernel characteristics, such as $\Omega(C(\log(T))^{d/2})$ for the SE kernel and $\Omega(C^{\frac{v}{d+v}}T^{\frac{v}{d+v}})$ for the $v$-Matérn kernel [63,12]. We see that both the order of the corruption level $C$ and whether the lower bound involves a non-logarithmic $T$ depend on kernel properties. As few restrictions are imposed on the reward mapping $h(\cdot)$ for neural bandits, we need to use regression models based on the Neural Tangent Kernel (NTK) to perform theoretical analysis. In this case, without a well-established existing knowledge base for the NTK (e.g., the number of functions $M$ needed for the functional separateness condition [63,15] for the NTK), obtaining such a lower bound for the neural bandit case will require significant effort. 
In this case, we consider deriving such a theoretical regret lower bound to be a promising and challenging future direction of this paper.
Rebuttal 1: Rebuttal: We would like to sincerely thank reviewers for your valuable questions and comments. *Please refer to attached PDF for added experiments.* Given the page limit of 6000 characters, we will try our best to provide our detailed response. Please also see our manuscript for cited papers. *Thank you!* **Q1: Overall discussion on computational efficiency?** In practice, for our experiments, we can adopt the warm-start GD process (details and pseudo-code are in Appendix Subsection B.6) to reduce the computational cost, in terms of deriving the arm-specific network parameters (line 9, Algorithm 1). In round $t\in \\{3, \dots, T\\}$, for each candidate arm $x_{i, t}, i\in [K]$, we can acquire its arm-specific parameters $\theta_{i, t-1}$ by fine-tuning existing trained parameters $\theta_{t-2}$ with a small number of training samples (received arm-reward pairs), instead of obtaining $\theta_{i, t-1}$ by training from scratch, i.e., starting from randomly initialized $\theta_{0}$ with a large number of samples. Similar ideas on mini-batch warm-start training are also applied by other bandit-related works (e.g., [9]). Meanwhile, it is also a common practice for neural bandit works to adopt warm-start training. On one hand, neural bandit works generally formulate their GD training process by starting from randomly initialized $\theta_{0}$ (e.g, [76,74,7,8,60]), in compliance with theoretical analysis requirements. On the other hand, however, for their experiments and source code, it is common to train model parameters incrementally, by updating trained parameters from previous rounds, since it is more practical and computationally efficient for real applications. 
In this case, for our experiments, in order to strike a balance between computational costs and model performance: - (1) Inspired by meta-learning ideas [Finn et al.], we utilize warm-start GD by adapting the previously trained network parameters $\theta_{t-2}$ for each candidate arm with a small number of training samples, instead of training from $\theta_{0}$ with a large number of samples. Details are in Appendix Subsection B.6. - (2) Based on our formulation of the warm-start GD process (Algorithm 2, Subsection B.6), we sample a fixed number of mini-batch training samples (i.e., received arm-reward pairs) for each candidate arm to calculate arm weights and perform GD. This is inspired by the idea of mini-batch warm-start GD training [9]. In this case, a fixed number of training samples helps the round-wise inference time stay relatively stable, without growing drastically along with $T$. *[Finn et al.] Model-agnostic meta-learning for fast adaptation of deep networks.* **Q2: Complementary experiment results?** - **Experiment 1** (Figure 1): Warm-start vs. restarting from $\theta_{0}$ for each round, *given the same number of training (adaptation) samples*. We see that the warm-start process can improve performance, as it can leverage previously trained parameters, whereas restarting from $\theta_{0}$ can make convergence difficult. - **Experiment 2** (Figure 1): Here, we use $T=10000$ for both recommendation data sets (MovieLens, Amazon) and MNIST to maintain consistency, similar to existing neural bandit works for recommendation, which apply $T=10000$ to their recommendation data sets (e.g., [60,7]). Here, we include experiments with $T=15000$, where our proposed methods can maintain good performance. - **Experiment 3** (Figure 2): Experimental results with an increased number of arms $K$. We see that the warm-start strategy can lead to good performance while helping the inference time stay relatively stable. 
- **Experiment 4** (Figure 3): Number of parameters and inference time, given different hidden dimensions. We can see that the inference time stays relatively stable, due to a fixed number of adaptation samples for warm-start GD. - **Experiment 5** (Figure 4): MNIST data set under corruption-free setting, where our proposed methods can maintain competitive performances compared with baselines. - **Experiment 6** (Figure 4): MovieLens data set with increasing corruption probability, and the probability of corrupting a chosen arm is $p_{t}$, where $p_{t}$ starts from $p_{0} = 20\%$ and increases by $\Delta p = 0.2\%$ for every 100 rounds. NeuralUCB-WGD and R-NeuralUCB can still enjoy an advantage over baselines. **Q3: How arms are corrupted for our experiments?** As in the caption of Figure 1: (1) For MovieLens and Amazon, the adversary will randomly decide whether to corrupt the reward of each chosen arm with a probability of 20\% or 50\%; (2) For MNIST data set, we consider a fixed budget $C=2000$ or $C=4000$, and randomly sample 2000 or 4000 rounds across $T=10000$ as corrupted. **Q4: Context corruption?** Recall that we use NTK-based regression to bridge reward mapping function $h(\cdot)$ and over-parameterized neural network $f(\cdot)$, where we quantify impacts of reward corruptions by decomposing NTK-based regression parameters as in Appendix Subsection C.4. However, this approach will fail under arm context corruption settings. With gradient-based mapping being $g(\cdot)$, it can be difficult to measure the difference between projected context with corruption $g(x+\Delta x)$ and the original projection $g(x)$. In this case, we may need additional assumptions, such as Lipschitz-smoothness assumption for the neural network [Du et al.] or Lipschitz reward mapping function [42]. We propose to explore this direction in our future works. *[Du et al.] 
Gradient descent finds global minima of deep neural networks.* **Q5: How to choose $\nu$?** Similar to existing stochastic neural bandit works (e.g. [76,74,7]), when the random noise variance proxy $\nu$ is unknown to the learner, we consider it as a tunable parameter controlling exploration intensity, and use grid search to find a good $\nu$. Please also find related parameter study in Appendix Subsection A.2. Pdf: /pdf/0188d2393d2039f6a54ac0e9add94fb18f5f11e3.pdf
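The three corruption schedules described in Q3 and Experiment 6 above can be reproduced in a few lines. The sketch below simulates only the corruption masks (which rounds are attacked); the corruption value applied to a reward is a placeholder of our own, not the paper's attack:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 10_000

# (1) Probabilistic corruption (MovieLens / Amazon): each chosen arm's
#     reward is corrupted independently with probability 20% (or 50%).
prob_mask = rng.random(T) < 0.20

# (2) Fixed budget (MNIST): exactly C = 2000 (or 4000) of the T rounds
#     are sampled uniformly at random as corrupted.
C = 2000
budget_mask = np.zeros(T, dtype=bool)
budget_mask[rng.choice(T, size=C, replace=False)] = True

# (3) Increasing schedule (Experiment 6): p_t starts at p_0 = 20% and
#     grows by Delta p = 0.2% every 100 rounds.
p_t = 0.20 + 0.002 * (np.arange(T) // 100)
sched_mask = rng.random(T) < p_t

def corrupt(reward, corrupted, magnitude=1.0):
    # placeholder adversary: subtract a fixed magnitude when attacked
    return reward - magnitude if corrupted else reward
```

Schedule (2) caps the total number of corrupted rounds exactly at the budget $C$, while (1) and (3) only control the expected corruption rate per round.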
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Topological Generalization Bounds for Discrete-Time Stochastic Optimization Algorithms
Accept (poster)
Summary: This paper proposes a method to upper-bound the generalization gap $G(w)$ (the discrepancy between the empirical risk and the theoretical risk for a given parameter $w$ of our model) by tractable _topological_ quantities. Namely, given a set $W = \{w_\tau,\dots,w_T\}$ (typically, a sequence of SGD iterates between iterations $\tau$ and $T$, where $w_\tau$ is already near a local minimum of the empirical risk), the authors derive bounds of the form $$\sup_{w \in W} G(w) \leq \sqrt{\frac{\text{Topological quantity} + \text{Information-theoretic quantity} + \log(\zeta^{-1})}{n}}$$ with probability $1 - \zeta$. The work focuses on building actual topological quantities that make this claim true, proposing two possible candidates: - The _$\alpha$-weighted lifetime sums_, which (roughly) compute the minimum spanning tree of $W$ and then assign a weight to it, - The _positive magnitude_, which (roughly) computes the positive mass of $\beta = K_s^{-1} \mathbb{1}$, where $K_s$ can be understood as a Gaussian/Laplace kernel on $W$ for some distance $\rho$ and bandwidth $s$ (and $\mathbb{1}$ is a vector full of $1$s). Contrary to previous similar works, their theoretical results directly hold in (mostly) practical situations: they do not require observing a time-continuous trajectory, and the quantities involved are (at least theoretically) directly computable (in practice, this requires approximation). Experimentally, the authors observe through an extensive set of experiments that the upper bound they propose performs better, in terms of correlation with the actual generalization gap, than the other kinds of topological upper bounds previously introduced. Strengths: The work proposes an original and promising approach to tackle an important question. Contrary to most previous works, the experiments go way beyond "toy models", and the results are reported on modern architectures like transformers, showing that the method is usable in practice on "real" setups. 
Experimental results are extensively reported, supporting the reproducibility of the work. The presentation of related works / comparison with the SotA (with which I am not completely familiar) seems to be quite comprehensive. Weaknesses: In my opinion, the main weakness is that the (main) paper is rather far from being self-contained, and it is somewhat hard to get an intuitive grasp of the proposed approach. The appendix (about 40 pages!) contains crucial information (proofs, missing definitions, etc.) that couldn't be carefully investigated (*). While the contribution itself is seemingly solid, this prevents me from claiming that the work is theoretically sound and limits the understanding if one sticks to reading the main paper. For instance, if one looks at Theorem 3.4, I cannot understand the role of $\tau$ and $T$ (as far as I can tell, they only appear in $I_\infty$ in the rhs, but since this term is not defined I do not know what happens if, say, $T = \tau$ or $T = +\infty$). Similarly, without reading the proofs, I have no clue why taking $\alpha = 1$ should be natural, what role is played by the constant $q$ in the $(q,L,\rho)$-Lipschitz assumption, etc. I understand that being comprehensive is impossible given the space limitation, but I believe that it is part of the contribution to produce an accessible, less-than-10-page presentation of the work. (*) Note that I quickly parsed the supplementary material, which I found quite enlightening, but cannot guarantee the correctness of its content. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Could you give (for, say, Theorem 3.4) a sort of "sketch of proof" and more intuition on "why this should be true"? 2. Related to the above, in Thm 3.4 (and 3.5), the r.h.s. depends on many parameters on which the l.h.s. doesn't. For instance, it seems that I could pick a different $\rho$ or $\alpha$ without affecting the l.h.s. Is that correct? 
Could I just try to find a $\rho$ that makes $E_\alpha^\rho$ close to $1$ in order to improve my bound? 3. Still in Thm 3.4, would it be in some sense possible that we pick $\alpha$ below the PH-dim of (roughly speaking) the possible support of the random set $W$, in which case (here again, this is informal) it may be possible that $E_\alpha$ is unbounded? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Limitations have been discussed by the authors, which is appreciated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time spent on the paper and their insightful comments. We address below the main questions (alongside the weaknesses). **Could you give (for, say, Theorem 3.4) a sort of "sketch of proof" and more intuition on "why this should be true"?** First, we would like to point out that even though the appendix is long, most of it is dedicated to additional empirical results and figures, while the proofs are a relatively short portion of the appendix. Including a sketch of proof is a very good suggestion from the reviewer. In the next version of the paper, we include a sketch of proof of Theorem 3.4. The general idea is the following: we build on Corollaries 26 and 27 of reference [18] of our paper to get a bound on the worst-case generalization error as (this is an informal statement; in particular we omit absolute constants and less relevant terms, refer to the paper for more details): $$ G \lesssim \sqrt{\frac{N_\delta}{n}} + \sqrt{\frac{\mathrm{IT}}{n}}, $$ where $N_\delta$ is a covering number of the trajectory and $\mathrm{IT}$ is an information-theoretic term. First, we use Lemma 16 of [11] to show that $\mathrm{IT}\leq I_\infty$, where $I_\infty$ is the mutual information term appearing in Theorem 3.4. Second, we bound $N_\delta$ in terms of packing numbers, denoted $P_\delta$ (see Definition A.15), which we further bound in terms of $E_1$ (or in general $E_\alpha$ for $\alpha \leq 1$) using a geometric construction displayed in Figure 2.b (rebuttal pdf). We have now included this figure in the paper to provide more intuition on the proof technique, which is based on this geometric observation. For Theorem 3.5, we can mention that the key difference is Lemma B.13, whose proof is short but contains the essential ideas behind this theorem. It relies on properties of magnitude that are detailed in the appendix. **Related to the above, in Thm 3.4 (and 3.5), the r.h.s. depends on many parameters on which the l.h.s. 
doesn't. For instance, it seems that I could pick a different $\rho$ or $\alpha$ without affecting the l.h.s. Is that correct? Could I just try to find a $\rho$ that makes $E_\alpha^\rho$ close to $1$ in order to improve my bound?** It is true that some of our bounds depend on free parameters. As noted by the reviewer, this is in particular true for the constant $\alpha \in [0,1]$ and the pseudometric $\rho$, as soon as $\rho$ satisfies the regularity condition given by Definition 3.1. Let us discuss these two quantities separately. > Choice of $\alpha$ First, regarding the choice of $\alpha$, the only pertinent choice in general seems to be $\alpha=1$, which is what we choose in our experiments. The reason why this choice is natural is that the $\alpha$-weighted lifetime sums tend to be higher when $\alpha$ gets smaller. In the case where the random set is finite, $E_0$ even corresponds to the cardinality of the set and does not capture any meaningful topological behavior (moreover, it goes to infinity when the size of the set goes to infinity). The reviewer may consult [61] for better intuition about the behavior of this quantity. > Choice of $\rho$ and $q$ Second, for the choice of $\rho$, the possibilities might not always be numerous. Indeed, $\rho$ must satisfy a $(q,L,\rho)$ condition as in Definition 3.1. The choice of $\rho$ impacts the bound both by affecting the topological quantity ($E_\alpha^\rho$ in the case of Theorem 3.4) and the values of $q$ and $L$. It should be noted that the pseudometrics $\rho_S^{(q)}$ (with $q\geq 1$) appearing in Example 3.2 always satisfy the $(q,1,\rho_S^{(q)})$-Lipschitz condition. In that particular case, it seems that choosing $\rho_S := \rho_S^{(1)}$ is the best choice because of Hölder's inequality: $\forall q \geq 1,~ \rho_S \leq \rho_S^{(q)}$. The fact that $q=1$ is a good choice is particularly apparent in the proof of Theorem 3.5, for this exact reason. 
When more pseudometrics are admissible, the reviewer is right to notice that there could be an optimal choice giving the best bound. In the final version of the paper, we discuss the choice of these parameters in more detail. > I cannot understand the role of $\tau$ and $T$ The meaning of these parameters is that we collect the training trajectory between iterations $\tau$ and $T$. While it is technically possible to choose $\tau=T$, the trajectory would then be reduced to a single point and therefore would not contain any interesting topological information. Similarly, taking $T\to\infty$ seems to be possible under mild assumptions, even though it is not our focus. Note that our topological complexities also depend implicitly on $\tau$ and $T$, as they are evaluated on the trajectory between iterations $\tau$ and $T$. **Still in Thm 3.4, would it be in some sense possible that we pick $\alpha$ below the PH-dim of (roughly speaking) the possible support of the random set $W$, in which case (here again, this is informal) it may be possible that $E_\alpha$ is unbounded?** This is a pertinent question from the reviewer. In the case that is of most interest for our experiments, the random set $W$ is finite, which makes the $\alpha$-weighted lifetime sums bounded for any value of $\alpha$. That being said, if one extends our theory to infinite sets (which is technically possible for most aspects, even though some statements should be reformulated), the value of $\alpha$ should indeed be carefully chosen with respect to the dimension of the set. Even in our experimental setting, if the size of the set goes to infinity, it might be pertinent to take into account the dimension of the "limiting set" to tune the value of $\alpha$, keeping in mind that it must satisfy $\alpha \in [0,1]$. However, this setting is largely theoretical, as the sets are large but finite in practice. 
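To make the finite-set case concrete, here is a toy sketch (our own illustrative code, not from the paper; we use the standard identification of the degree-0 lifetime sum with a sum over minimum-spanning-tree edge lengths, so normalizations may differ from the paper's exact definition):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def alpha_weighted_lifetime_sum(points, alpha):
    """E_alpha on a finite point cloud: sum of the minimum-spanning-tree
    edge lengths raised to the power alpha (degree-0 persistence)."""
    mst = minimum_spanning_tree(squareform(pdist(points)))
    return float(np.sum(mst.data ** alpha))

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 3))  # a toy finite "trajectory" of 50 points in R^3

E1 = alpha_weighted_lifetime_sum(W, 1.0)  # the alpha = 1 choice used in the experiments
E0 = alpha_weighted_lifetime_sum(W, 0.0)  # every edge contributes 1: counts the 49 = |W| - 1 MST edges
```

For $\alpha=0$ every edge contributes $1$, so $E_0$ only reflects the cardinality of $W$ (and grows without bound as $|W|$ grows), while any $\alpha\in(0,1]$ weights the actual edge lengths; on a finite set both are trivially bounded.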
In the final version of the paper, we added a discussion about this interesting question from the reviewer. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for taking the time to answer my review. As I said, my main concern with the work is that it contains a lot of technical content deferred to the appendix that I could not review in detail. Otherwise I'm fine with it; I will discuss this point with the AC and other reviewers in the next discussion period. --- Reply to Comment 1.1.1: Comment: We understand that the reviewer finds the appendix to be important for the grasp of our paper, so we would like to clarify its content. First, let us highlight that the appendix is 26 pages long, and not 40 as claimed by the reviewer. 1) Having proofs in the appendix (Appendix B) is standard for NeurIPS papers and helps the readability of our paper. Grasping the proofs is not needed to grasp the ideas in our paper. 2) Our exposition of persistent homology (Appendix A) should not be seen as a set of missing definitions but rather as a short treatise on the background material, included to make our paper self-contained. We could have cited the relevant resources only, but we instead opted for including the relevant content from those references as a small section. We hope that such non-crucial sections can help the uninformed reader understand our paper. 3) The remaining part of the appendix (Appendix C) is dedicated to experimental setting details and additional empirical results. These experiments mainly support the empirical findings reported in the main paper and provide detailed analyses of the content reported in the main paper. In the final version of the paper, we will use the extra page to include, in the main paper, a summary of the main ideas contained in the appendix, in particular a sketch of proof and comments on the additional experiments. We hope that the reviewer reconsiders their evaluation in light of these clarifications.
Summary: Prior work has sought to provably bound the generalization of a neural network based on complexity measures, e.g., using some evaluation of the mutual information between the data and the training path; however, such proofs have relied on the topologies from the asymptotic infinite-training case and other impractical assumptions. This paper seeks to extend the provable generalization bounds to a practical training regime with discrete time steps. The authors attempt to make such complexity measures more tractable by, rather than a full derivation of the intrinsic dimension measures utilized in prior works, leveraging instead the underlying dependent variables on which intrinsic dimension is based, which are more accessible during training, resulting in the measures of "alpha-weighted lifetime sums" and "magnitude". In addition to the forms of experimental validation comparing the complexity measures to the resulting generalization gap, the scope of the paper includes extensive theoretical discussions that are slightly beyond the competence of this reviewer to fully evaluate, and so this review should be considered as taking much of these aspects at "face value". That being said, relying on an assumption of rigor in such aspects, I believe the scope of the paper and implications of the claims would likely merit some form of recognition from the conference, like an oral presentation or spotlight. Strengths: - Originality I have seen some manner of complexity measures as a form of performance metric discussed in prior work, including things like sharpness measures; however, I have yet to see anyone claim bounded forms of generalization guarantees accessible during intermediate stages of training, suggesting that this could be a significant improvement towards such applications. - Quality The paper was ambitious in scope and for the most part appeared dense in significance. 
- Clarity Some of the benchmarking was more intuitive as the complexity-versus-generalization charts than as Table 1, for instance; I don't know if that could be simplified in some fashion (sometimes less information is better; save the half page of numbers for the appendix, for instance). - Significance Taken at face value, the availability of a provable form of generalization guarantee efficiently available in real time during training is potentially quite significant towards mainstream practice. Weaknesses: It is hard for this reviewer to fully assess the theoretical merit of the derivations, and I hope that some of the other reviewers may be more competent in this sense. Technical Quality: 3 Clarity: 2 Questions for Authors: Can you further clarify your use of the term "connected components"? Should we interpret that as the number or ratio of weights impacted by a gradient update step? Am I interpreting correctly that with increased connectivity we could expect improved generalization, or is that too simplistic? (After all, for the neural tangent kernel at infinite width wouldn't all weights be updated with each gradient step?) Do you expect that the generalization bounds are asymptotic towards the scale of foundation models, or can such assumptions be comparably extended to models of limited scope and scale? Can you clarify how training may be conducted in this form? E.g., for a simple supervised learning setup, am I correct in assuming that it would be possible to replace the performance metric, but that the labels corresponding to the training data would still be required? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 4 Limitations: It would be hard to fully validate the claims of a "provable generalization bound" without some more significant survey of the appendices, which are outside the competence of this reviewer. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and their insightful comments on our work. > Some of the benchmarking was more intuitive as the complexity-versus-generalization charts than as Table 1, for instance; I don't know if that could be simplified in some fashion We believe that Table 1 is quite informative (though compact), as it summarizes a lot of our empirical findings. In order to provide more visual representations of our results, we provided in Fig. 4.c a graphical representation of the performance of our topological complexities in the plane $(\psi_{\mathrm{bs}},\psi_{\mathrm{lr}})$. This figure contains part of the information of Table 1 along with additional (model, dataset) pairs. Fig. 1.c is also a graphical representation of the values of $\Psi$ reported in the table. These two figures therefore contain most of the information from the table. In the next version of the paper, we have complemented them with additional plots similar to Fig. 1.c (see Fig. 2.a in the rebuttal pdf for an example). Note moreover that a wide range of plots in the appendix represent the topological complexities against the generalization error. > It is hard for this reviewer to fully assess the theoretical merit of the derivations In order to improve the clarity of the paper, we have added to the main text (thanks to the additional page) a sketch of proof for our main results in the next version of the paper. > Can you further clarify your use of the term "connected components"? Indeed, our use of the term "connected components" in Section 2 might not have been clear enough. It should be understood as follows: given a distance parameter $\delta>0$ and a point cloud $W$ (which could be one of our training trajectories, for instance), we construct a graph by connecting each pair of points at distance at most $\delta$ and count the number of connected components of the obtained graph. 
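This informal construction can be written out in a few lines (a toy illustration of the description above, not the implementation used for the paper):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import pdist, squareform

def n_components(points, delta):
    """Connect every pair of points at distance at most delta,
    then count the connected components of the resulting graph."""
    adj = squareform(pdist(points)) <= delta
    np.fill_diagonal(adj, False)
    n, _ = connected_components(csr_matrix(adj), directed=False)
    return n

pts = np.array([[0.0], [0.1], [5.0]])  # two nearby points and one far away
small = n_components(pts, delta=0.2)   # -> 2: the nearby pair merges
large = n_components(pts, delta=10.0)  # -> 1: everything merges at a large enough scale
```

Recording how these components appear and merge while varying $\delta$ is the "tracking at different scales" idea; persistent homology (Appendix A of the paper) formalizes it.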
What we meant by "tracking the “connected components” of a finite set at different scales" consists in observing the evolution of these connected components when varying $\delta$. While this is just an informal introduction to the concept of persistent homology, we provide in Appendix A a more formalized description. In the next version of the paper, we are more precise about it. > Should we interpret that as the number or ratio of weights impacted by a gradient update step? Am I interpreting correctly that with increased connectivity we could expect improved generalization, or is that too simplistic? The reviewer is right in noticing that if only a fraction of the weights are affected by the gradient steps, then the trajectory will be lower dimensional, which might make our topological complexities smaller. However, this is only a particular case and an overly simple view of what happens during training. The topological properties of the trajectory are the consequence of a complex iterative algorithm (see for instance reference [11] in the paper) and capture information that is more complicated than the ratio of updated weights. The emergence of connected components arises in a complex, recursive way; it is not simply related to the ratio of weights that are updated. Finally, the NTK is a specific limit with a particular scaling scheme; we do not believe that it can be compared to the behavior observed in our paper. > Do you expect that the generalization bounds are asymptotic towards the scale of foundation models, or can such assumptions be comparably extended to models of limited scope and scale? While our work is the first to compute topological generalization measures on neural architectures of real practical interest, applying the same procedures to large language models or other foundation models would still be far too computationally expensive. 
However, one could imagine a setup where neural networks are embedded in a lower-dimensional representation and the topological quantities are computed on the trajectories in this lower-dimensional space. While additional theory would be necessary to understand the true impact of this compression procedure, this might open the door to the computation of topological measures for foundation models. This could be a direction for future research. > Can you clarify how training may be conducted in this form? E.g., for a simple supervised learning setup, am I correct in assuming that it would be possible to replace the performance metric, but that the labels corresponding to the training data would still be required? Regardless of the training procedure, the dataset, and the model, we need two things to compute our topological complexities: a training trajectory and a pseudometric satisfying the condition of Definition 3.1. If the metric is chosen carefully, the method could theoretically be applied to any large model as soon as suitable computational resources are available. An interesting setting arises when a large model (e.g., a foundation model) is fine-tuned on a dataset different from the dataset used for pretraining. In that case, to compute our topological complexities, one would only need the dataset used for the fine-tuning procedure; the training data for the initial large model would not be needed. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I retain my recommendation to accept, pending any discussions with other reviewers.
Summary: - The authors provided a novel topological-complexity-based uniform generalization error bound, constructed on the $\alpha$-weighted lifetime sums or the positive magnitude. This bound shows better correlation with the generalization error compared to existing bounds. - The authors proposed an implementation scheme based on dissimilarity measures between neural networks, enabling the quantification of generalization across different model architectures without the need for domain- or problem-specific analysis. - The authors confirmed that their topological complexity term exhibits better correlation with the actual generalization gap in large-scale neural networks, such as ViTs and GNNs. Strengths: - More Realistic Generalization Error Bound - This paper addresses a significant limitation of previous topological complexity-based generalization error bounds, which do not hold under discrete parameter updates based on SGD or Adam but only when the number of iterations is taken to infinity. By overcoming this limitation, the bound provided in this paper offers a more practical and realistic understanding of generalization performance. - Additionally, the paper presents a method to compute this complexity for large-scale neural networks such as ViTs and GNNs, enhancing its practical applicability. - Innovative Combination of Two Interesting Topological Metrics for a Generalization Error Bound - The authors leverage the fact that the $\alpha$-weighted lifetime sums based on minimum spanning trees, a quantity from persistent homology, are related to a pseudometric, and apply this to generalization error analysis. - They also focus on a topological quantity called magnitude and its connection to metric spaces, successfully providing another generalization error bound. - These achievements are the result of effectively combining techniques from topological data analysis, which goes beyond mere application and demonstrates significant originality. 
- Correlation with Generalization Performance Confirmed through Numerical Experiments on Large-Scale Neural Networks - The proposed topological complexity is computationally feasible even for large-scale neural networks. This is crucial for understanding the generalization performance of models like LLMs in the future. Weaknesses: - Concerns Regarding the Assumptions - This bound only holds under bounded losses. Thus, as the upper bound $B$ increases, the bound becomes vacuous, and it diverges under unbounded losses. - Additionally, the relationship between the upper bound $B$ of the loss and the topological complexity is unclear, making the overall interpretation of the bound difficult. For instance, under large $B$, the correlation of the topological complexity might be negated, resulting in a bound that does not correlate well overall. Providing an intuitive discussion on this aspect would help clarify the significance of the bound. - I am not well-versed in the assumptions based on pseudometrics, so I cannot assess the validity of the $(1,L,\rho)$-Lipschitz continuity assumption. However, as the authors mentioned, this assumption limits the applicability of the bound to certain pseudometrics like the Euclidean distance or data-dependent pseudometrics. - Concerns Regarding the Mutual Information in the Bound - Although the authors reference literature to assert that the mutual information in their proposed bound is tighter than the information-theoretic quantities in traditional bounds, they do not provide concrete methods for calculating or estimating this quantity. - This means that while some terms in the proposed generalization error bound are computable, the bound as a whole is not computationally evaluable. Thus, although the topological quantities used in the bound correlate with generalization performance, this does not guarantee the overall tightness of the bound. 
In practice, even if the topological complexity decreases, an increase in the mutual information could render the bound vacuous. - The paper claims that the proposed bound is a uniform generalization bound. Therefore, it is necessary to discuss whether this bound achieves uniform convergence. If the mutual information term can be bounded by a constant, uniform convergence might be guaranteed, but this is not trivial and depends on the behavior of this quantity. Otherwise, the bound might fail to ensure uniform convergence. - Challenges in Handling Hyperparameters and Lack of Detailed Discussion - If I understand correctly, the evaluation of the bound and the topological complexity is closely related to the iteration step $\tau$ at which the measurement begins and the scale value $s$ in PMag, both of which need to be determined for the bound to hold. These settings carry the risk of arbitrariness in evaluating the generalization error. However, the paper does not provide sufficient discussion of the correlation between these choices and generalization performance (incidentally, there is a notation conflict between Kendall’s coefficient and $\tau$ here). For instance, providing more candidates for $s$ and analyzing the sensitivity of the correlation to changes in its value, or verifying the variation in experimental results under multiple starting-point candidates $\tau$, would allow for a minimum level of discussion. - Lack of Comparison with Other Information-Theoretic Quantities or Metrics Strongly Correlated with Generalization - The numerical experiments presented in this study focus solely on topological complexity and compare the contributions of this research with existing studies in terms of correlation with the generalization gap. As a result, the relationship between topological quantities and generalization, and their standing in the context of other discussions, such as generalization based on, e.g., gradient variance (e.g., Jiang et al. 
(2019)) or mutual information (e.g., Russo and Zou (2016); Harutyunyan et al. (2021); Steinke et al. (2020)), remains unclear. - Thus, it remains uncertain how the proposed measure compares to other evaluation metrics in terms of superiority. Citations: - Jiang et al. (2019): Jiang et al. Fantastic Generalization Measures and Where to Find Them. ICLR 2020. https://openreview.net/forum?id=SJgIPJBFvH - Russo and Zou (2016): D. Russo and J. Zou. Controlling bias in adaptive data analysis using information theory. AISTATS 2016. https://proceedings.mlr.press/v51/russo16.html - Harutyunyan et al. (2021): Harutyunyan et al. Information-theoretic generalization bounds for black-box learning algorithms. NeurIPS 2021. https://arxiv.org/abs/2110.01584 - Steinke et al. (2020): Steinke et al. Reasoning About Generalization via Conditional Mutual Information. COLT 2020. https://arxiv.org/abs/2001.09122 Technical Quality: 3 Clarity: 3 Questions for Authors: The following questions primarily stem from the Weaknesses section. Please review these alongside the Weaknesses section and provide your comments. If the concerns and questions raised in the Weaknesses and Questions sections are appropriately addressed, I will consider increasing the score. - **Discussion on the Validity of Assumptions**: Could you provide a more detailed discussion of the impact of the upper bound $B$ of the bounded loss and how it affects the bound's value? Specifically, for cases like the bound in Theorem 3.4, where the topological complexity is multiplied by $B$, it would be helpful to understand to what extent $B$ could potentially negate the correlation with the topological complexity, either theoretically or numerically. Additionally, could you elaborate on the validity of the $(1,L,\rho)$-Lipschitz continuity assumption? - **Mutual Information in the Bound**: Please provide a more detailed discussion of the mutual information term appearing in the bound. 
Can this quantity be upper-bounded by a constant, thus ensuring uniform convergence? Or could this potentially be unbounded and thus become a fundamental limitation? - **Sensitivity to Hyperparameter Settings ($s$ and $\tau$)**: How do the evaluation metrics and experimental results vary with the hyperparameter settings, such as $s$ and $\tau$? If feasible within the rebuttal period, it would be beneficial to discuss this sensitivity. If there is a valid reason for not addressing this, please explain. - **Comparison with Gradient Variance and Other Mutual Information Terms**: Does the topological complexity in the proposed bound achieve better correlation with generalization compared to gradient variance and mutual information terms, which are known to correlate well with generalization performance? If the derived topological complexity demonstrates a stronger correlation, it could significantly contribute to future research directions. Providing theoretical and experimental evidence would be beneficial. If there is a valid reason for not making this comparison, please explain. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - This paper is a theoretical study aimed at providing a more realistic evaluation of generalization error. The data used in the experiments, such as MNIST, are open-source, which indicates that potential negative social impacts are appropriately controlled. - While limitations are discussed in Section 6, there appear to be additional potential limitations as mentioned in the questions above. Addressing these would provide a more comprehensive understanding of the research's boundaries and enhance the validity of the study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful review. Below we address all the raised concerns. We hope that the reviewer reconsiders their score in light of our comments. Before delving further, we would like to point out that our main focus is to obtain not just theoretical insights but also empirically meaningful quantities. We stress the importance of the latter, with an extensive suite of evaluations showing that our topological quantities are meaningful and exhibit strong correlations. Our paper presents the first topological measures with an extensive set of experiments obtained on practical models and complex data domains. We have now included additional discussion on all the points listed below. **Validity of the assumptions.** > Bounded loss Let us emphasize that the bounded loss assumption is realistic in several practical settings (e.g., for the $0$-$1$ loss) and that it has been widely used in the literature on topological bounds [17,18,29]. In Thm. 3.4, the scaling of $B$ will not affect the correlation. Indeed, if we multiply the loss by $c>0$, i.e., $\ell':=c\ell$, and consider the data-dependent pseudometric $\rho_S$, then $E_1$ is also multiplied by $c$, hence the observed correlation is independent of the scaling factor. We now expand on these details in the paper. > Lipschitz assumption \& applicability First, the $(1,1,\rho_S)$-Lipschitz condition is always satisfied for the data-dependent pseudometric $\rho_S$. Indeed, by definition, $\Vert L_S(w) - L_S(w') \Vert = n \rho_S(w,w')$. The $(1,L,\rho)$-Lipschitz condition becomes non-trivial when $\rho$ is the Euclidean distance. In that case, it requires $\ell(w,z)$ to be Lipschitz, which is relatively standard. In our paper, we encapsulated both scenarios inside a single condition to identify the more general structure and pave the way for the use of other distances between models. Thus, our Lipschitz condition should not be seen as a limitation but rather as a generalization. 
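The scale-invariance argument can be checked numerically with a toy sketch (synthetic numbers of our own, purely illustrative): multiplying the loss by $c>0$ multiplies $E_1$ by $c$, and rank-based correlations such as Kendall's coefficient are unchanged by a positive rescaling.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
gap = rng.random(20)                   # synthetic generalization gaps over 20 runs
E1 = 2.0 * gap + 0.1 * rng.random(20)  # a synthetic complexity correlated with the gap

tau_orig, _ = kendalltau(E1, gap)            # correlation before rescaling the loss
tau_scaled, _ = kendalltau(100.0 * E1, gap)  # loss -> c * loss rescales E1 -> c * E1
```

The two coefficients coincide exactly, since Kendall's coefficient depends on the values only through pairwise comparisons, which a positive rescaling leaves unchanged.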
**Mutual Information.** The MI term measures the dependence between the data $S$ and a random set $\mathcal{W}$. Its presence is inherited from prior topology-based generalization studies, which face the same issues [17,18,8,11,69,29]. These bounds can be decomposed as the sum of a topological part and an MI part; yet, in prior works, the topological part suffered from major drawbacks, which is the main focus of our paper. We aim at improving the topological part of the bound. Obtaining a clearer understanding of the remaining part is an important direction for our future research. > No method to estimate the MI term We agree with the reviewer that the very complex structure of the MI term raises several difficulties: (1) we are not aware of existing techniques to evaluate the MI between random sets, and (2) the dimensionality of $\mathcal{W}$ (billions of parameters) could make a direct computation intractable. While these render the MI term uncomputable (as we mention in the limitations), our work focuses on the topological part. Our experiments show that these complexities are important and meaningful, in addition to being amplified in the first part of the bound, as the dependence is explicit. Nevertheless, as acknowledged by the reviewer, our definition of the MI is much simpler than in prior works, making it intuitive to understand. > Can this ensure uniform convergence? We thank the reviewer for this very pertinent question. Let us first highlight that our use of the term "uniform" refers to the bound being uniform over the trajectory, i.e., we look at the worst-case error over the trajectory. In this context, our convergence is uniform. As mentioned above, convergence can be ensured with similar information-theoretic terms (which we could use in our work) for particular algorithms, like SGLD [18]. However, in the most general case (e.g., a deterministic algorithm), it could lead to vacuous bounds. 
We acknowledged this lack of understanding in the limitations and we will add further comments. **Sensitivity to hyperparameters.** We thank the reviewer for suggesting this interesting sensitivity experiment. We performed an analysis of the sensitivity to $s$ for the ViT on CIFAR10 in Fig. 1.a (rebuttal pdf). We observe that the correlation is relatively stable near the two values of $s$ used in our study. Note that our choice of $\sqrt{n}$ is theoretically motivated (see lines 216 to 221), and the choice of a small $s$ comes from evidence in prior work. We have added these sensitivity experiments to the paper. We introduced the parameter $\tau$ for two reasons: (1) to allow the use of pretrained weights and (2) to capture the geometric properties of the trajectory near a local minimum. Therefore, $\tau$ is chosen to ensure that the training is near convergence. We did not try to tune the parameter $\tau$ in any way. Thank you for catching the conflict of notations; we will replace $\tau$ by $t_0$. **Comparison with other terms.** As suggested by the reviewer, we conducted a small-scale experiment to compare our complexities with the gradient variance [31] in Fig. 1.b (rebuttal pdf). These preliminary results indicate comparable correlation performance while slightly favoring our method. We have now included this experiment with the gradient variance in the final version and will extend it to further settings. On the other hand, the numerical comparison with conditional mutual information (CMI) bounds requires a rigorous re-adaptation of our models and datasets to the CMI analysis, which is not doable within the rebuttal period, but we will consider trying this experiment for the final version. We also point out that our complexities are expected to be much less computationally demanding than CMI. Indeed, one can evaluate them based on a single training run and no supersample of data is necessary. 
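As an illustration of how such a sweep over $s$ can be run, here is a toy sketch (our own code, assuming the Laplace-kernel form of positive magnitude, where one solves $K_s\beta = \mathbb{1}$ with $K_s = \exp(-s\,d(\cdot,\cdot))$ and sums the positive entries of $\beta$; the paper's exact definition and pseudometric may differ):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def positive_magnitude(points, s):
    """PMag(s): solve K_s beta = 1 for the kernel K_s = exp(-s * d),
    then sum the positive entries of beta."""
    K = np.exp(-s * squareform(pdist(points)))
    beta = np.linalg.solve(K, np.ones(len(points)))
    return float(np.sum(beta[beta > 0.0]))

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 3))          # toy stand-in for a training trajectory
scales = [0.1, 1.0, np.sqrt(len(W))]   # includes the sqrt(n)-type choice discussed above
pmags = [positive_magnitude(W, s) for s in scales]
```

Plotting such a sweep against the observed generalization gaps across runs gives a sensitivity curve of the kind reported in Fig. 1.a of the rebuttal pdf.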
Finally, let us reiterate that, as opposed to most existing works, our measures are meant for the worst-case generalization error (see Section 4). --- Rebuttal Comment 1.1: Title: Acknowledgements Comment: First and foremost, I would like to express my gratitude to the authors for their detailed feedback. I also apologize for my misunderstanding regarding the nature of the generalization error bounds presented in this paper, which are worst-case generalization bounds rather than algorithm-dependent bounds. I misunderstood this point in the section discussing mutual information. The authors have effectively addressed my concerns. In addition to this clarification, I find the analytical approach and the insights gained from the bounds to be particularly interesting. I now believe the paper makes a sufficient contribution to warrant acceptance at NeurIPS. Therefore, I have decided to raise my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for considering our feedback and updating their score. We are glad that our responses could address their concerns. We are incorporating these into the revision.
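As background for the quantity $\mathrm{PMag}(s\mathcal{W})$ discussed in this thread: the magnitude of a finite point cloud is commonly computed from weights that solve a similarity-matrix equation, with magnitude being their sum. The sketch below assumes that standard definition; the positive-part variant shown is only one plausible reading of "positive magnitude", and the function names are illustrative, not the paper's.

```python
import numpy as np

def magnitude_weights(X, s=1.0):
    """Solve Z w = 1, where Z_ij = exp(-s * ||x_i - x_j||), i.e. the scaled cloud sX."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Z = np.exp(-s * D)
    return np.linalg.solve(Z, np.ones(len(X)))

def magnitude(X, s=1.0):
    """Magnitude of the finite metric space sX: the sum of the weights."""
    return magnitude_weights(X, s).sum()

def positive_magnitude(X, s=1.0):
    """Illustrative reading of PMag: sum only the positive weights."""
    w = magnitude_weights(X, s)
    return w[w > 0].sum()

# A single point always has magnitude 1; well-separated points approach
# magnitude n as the scale s grows (magnitude "counts effective points").
```

Since the positive-part sum dominates the full sum, $\mathrm{PMag} \ge \mathrm{Mag}$ for any finite cloud, which is consistent with treating it as a complexity measure that grows with the effective size of the trajectory.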
Summary: The paper makes significant contributions by establishing new theoretical connections between generalization and topological complexity measures, specifically $\alpha$-weighted lifetime sums and positive magnitude. The authors introduce these novel measures and link them to generalization error using innovative proof techniques. Experiments show that these measures correlate highly with generalization error across various architectures and datasets. The work offers simpler, less restrictive generalization bounds, removing the need for complex geometric assumptions. These flexible measures are adaptable to different domains, tasks, and architectures, providing practical, theoretically justified tools for understanding and predicting generalization in modern deep neural networks. Strengths: The authors establish new theoretical connections between generalization and topological complexity measures, specifically focusing on $\alpha$-weighted lifetime sums and positive magnitude. They also introduce and elaborate on new topological complexity measures, such as the positive magnitude, a modified version of the magnitude measure. These measures are linked to generalization error using new proof techniques. The paper respects the discrete-time nature of training trajectories and investigates topological quantities suitable for practical topological data analysis tools, which leads to computationally efficient measures. It proposes generalization bounds that are simpler and less restrictive compared to existing methods, removing the need for complex geometric assumptions and making them more practical. Weaknesses: Maybe I get this wrong, but in line 113, is $Y\subset X$ or $Y\subset A$? Technical Quality: 3 Clarity: 3 Questions for Authors: 1.- Have you explored the use of compact metric spaces? 2.- Have you explored changing the metric to a non-Euclidean one? 3.- In line 182, what do you mean by $\mu_z^{\otimes n}$? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Lack of understanding of IT terms. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their interesting comments. > Maybe I get this wrong, but in line 113, is $Y\subset X$ or $Y\subset A$? We thank the reviewer for pointing out this minor typo in the definition of the PH dimension. Indeed, we have replaced $Y\subset X$ by $Y\subset A$ in the final version of the paper. > 1.- Have you explored the use of compact metric spaces? We thank the reviewer for this interesting question/comment. Since we are interested in discrete optimizers, the most pertinent setting is that of discrete (even finite in practice) trajectories. However, it is interesting to ask whether our theory extends to random compact sets. As we hinted in Remark B.15 in the appendix, Theorem 3.5 (for positive magnitude) could be extended to such a compact setting. On the other hand, the extension of Theorem 3.4, involving $\alpha$-weighted lifetime sums, might be more complicated, as the definition of $E_\alpha$ makes explicit use of the finiteness of the set. However, we believe the main ideas of Theorem 3.4 could still be used to obtain interesting bounds in the compact setting and leave this question for future work. We have now added a further remark about it in the revision. > 2.- Have you explored changing the metric to a non-Euclidean one? As we explain in Section 3.1 (about the mathematical setup), our framework encompasses a wide range of metrics and pseudometrics, as long as they satisfy the condition given by Definition 3.1. In particular, an important part of our experiments uses the so-called data-dependent pseudometrics on $\mathbb{R}^d$, which we denote by $\rho_S^{(p)}$ in the paper. These pseudometrics are typically non-Euclidean on $\mathbb{R}^d$. While we use these non-Euclidean metrics, one can see in Example 3.2 that $\rho_S^{(p)}$ results from the Euclidean distance on $\mathbb{R}^n$ applied to the embedding $L_S(w)$ of $w$. 
It could be interesting to explore other non-Euclidean possibilities, which we leave for future work. For instance, if the loss is supported on a Riemannian manifold, then the associated metric could serve as a non-Euclidean metric to compute the topological complexities. One of the goals of our work is to leave the door open to such possibilities. > In line 182, what do you mean by $\mu_z^{\otimes n}$? First, as defined in the introduction, the notation $\mu_z$ corresponds to the unknown data distribution. We used the notation $\mu_z^{\otimes n}$ as shorthand for the product measure $\mu_z \otimes \dots \otimes \mu_z$. Therefore, the notation $(z_1,\dots,z_n) \sim \mu_z^{\otimes n}$ means that the data points $z_i$ are sampled i.i.d. from the distribution $\mu_z$. We have now made this notation more precise in the paper. --- Rebuttal Comment 1.1: Comment: Thanks for your response!
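To make the pseudometric discussion above concrete, here is a small sketch in the spirit of Example 3.2: embed parameters $w$ into $\mathbb{R}^n$ via their per-sample losses $L_S(w)=(\ell(w,z_1),\dots,\ell(w,z_n))$ on data drawn i.i.d. from $\mu_z$ (i.e., $(z_1,\dots,z_n)\sim\mu_z^{\otimes n}$), then take the Euclidean distance between embeddings. The squared-error loss, the toy Gaussian data distribution, and the $1/\sqrt{n}$ normalization are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative loss: squared error of a linear model (not the paper's choice).
def loss(w, z):
    x, y = z
    return (x @ w - y) ** 2

def L_S(w, S):
    """Embed parameters w into R^n via per-sample losses."""
    return np.array([loss(w, z) for z in S])

def rho_S(w1, w2, S):
    """Data-dependent pseudometric: Euclidean distance between loss
    embeddings, normalized by sqrt(n) (normalization is an assumption)."""
    return np.linalg.norm(L_S(w1, S) - L_S(w2, S)) / np.sqrt(len(S))

# Sample (z_1, ..., z_n) i.i.d. from a data distribution: here a toy
# Gaussian linear-regression distribution, standing in for the unknown mu_z.
d, n = 3, 50
w_true = rng.normal(size=d)
S = [(x, x @ w_true + 0.1 * rng.normal()) for x in rng.normal(size=(n, d))]

w1, w2 = rng.normal(size=d), rng.normal(size=d)
assert rho_S(w1, w1, S) == 0.0               # pseudometric: zero self-distance
assert rho_S(w1, w2, S) == rho_S(w2, w1, S)  # symmetry
```

Note that $\rho_S$ is only a pseudometric on parameter space: two distinct parameter vectors with identical loss profiles on $S$ are at distance zero.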
Rebuttal 1: Rebuttal: The rebuttal pdf includes figures that are mentioned in some of our answers to the reviewers. Fig. 1.a analyses the sensitivity to $s$ of the correlation between the generalization error and $\mathrm{PMag}(s\mathcal{W})$. Fig. 1.b is a preliminary result regarding the comparison of our topological complexities with gradient variance. Fig. 2.a is a visual representation of some results of Table 1, it completes Fig. 1.c. in the paper. Fig. 2.b is a graphical representation of (part of) the proof of Theorem 3.4. Pdf: /pdf/3ffa351011b5822577f2d749647b66d0281a3496.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Improved Regret of Linear Ensemble Sampling
Accept (poster)
Summary: This paper proposes a simple ensemble sampling (ES) approach and its frequentist analysis for the linear bandit problem. Specifically, it shows that the proposed algorithm achieves $\tilde{O}(d^{3/2}\sqrt{T})$ regret when the ensemble size $m$ is $\Omega(K \log T)$. This regret bound improves the dependency on $d$ and the ensemble size compared to existing regret bounds for ES. In the regret analysis, it proposes a framework that can comprehensively handle various algorithms and a novel analytical technique to ensure the optimism of ES. It also shows that linear perturbed-history exploration (LinPHE) is a special case of ES. Strengths: - This paper proposes a simple ES, demonstrating that the proposed algorithm achieves $\tilde{O}(d^{3/2}\sqrt{T})$ regret when the ensemble size $m$ is $\Omega(K \log T)$. - This regret bound improves the dependency on $d$ compared to existing work. - Furthermore, this bound matches the regret bound achieved by Thompson sampling. - This paper proposes a new analytical framework that can uniformly handle algorithms like ES, LinPHE, LinUCB, and LinTS (Theorem 2). - This paper introduces a novel theoretical analysis to ensure the optimism of ES (Lemma 2). - This paper shows for the first time the theoretical connection that LinPHE is a special case of ES. - This paper provides a detailed and comprehensive comparison with existing work. Weaknesses: - The theoretical analysis in this paper does not derive the regret upper bound for ensemble sampling in linear contextual bandits. - Additionally, the reason for this is not discussed in detail. Technical Quality: 3 Clarity: 3 Questions for Authors: While I did not verify all the proofs in detail, I believe this paper is deserving of acceptance. ### Questions - In my understanding, applying the theoretical analysis in this paper to linear contextual bandits is challenging due to the difficulty in extending Lemma 2, making it hard to derive the regret bound for ES. 
Is my understanding correct? ### Minor comments - Algorithm 2: $Z_{t.1}$ -> $Z_{t,1}$ Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations of this paper in Appendix K. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive feedback and particularly your recognition of the theoretical value of our work. We are more than happy to address any questions. **Linear Contextual Bandits:** Please note that none of the existing linear ensemble sampling literature [15, 20, 8] has discussed linear contextual bandits with changing context in each round, if that is what you mean. Additionally, many linear bandit papers, such as [1, 3, 11], do not separately address the contextual case, even though their analyses are applicable to contextual cases. Therefore, we first addressed the problem setting that the previous ensemble sampling literature has focused on: the linear bandit. Your comment about Lemma 2 is correct, as it would not be directly applicable when the arm set, specifically the optimal arm, is given adversarially. Techniques in our work are developed to derive a sharper regret bound for linear bandits. It would be an interesting future direction to derive regret for the contextual setting. We will include a discussion on this. Thank you for pointing it out. --- Rebuttal 2: Title: Our contributions Comment: Dear Reviewer vWv8, We again thank you for recognizing the value of our work. To put our contributions in perspective, let's compare the face value of our results with those in the relevant literature:

| Paper | Regret Bound | Ensemble Size | Publication |
|---|---|---|---|
| Lu and Van Roy [15] | Invalid | Invalid | *NeurIPS 2017* |
| Phan et al. [A] | Invalid (Lemma 13) | Invalid | *NeurIPS 2019* |
| Qin et al. [20] | $O(\sqrt{dT\log{K}})$ **Bayesian** regret | $O(KT)$ | *NeurIPS 2022* |
| Janz et al. [8] | $O(d^{5/2}\sqrt{T})$ **Frequentist** regret | $O(d\log{T})$ | - |
| **Our work** | $O(d^{3/2}\sqrt{T})$ **Frequentist** regret | $O(K\log{T})$ | Currently under review at *NeurIPS 2024* |

##### (Phan et al. 
[A] contains an extended result on ensemble sampling in their appendix using the results from Lu and Van Roy [15], but since the latter is incorrect, so is the former). Even when considering just the face value of the results presented in each paper, overlooking our work would discourage further progress in the field of ensemble sampling. ### Key Points:

- **Prior to our work, none of the previous studies in ensemble sampling (Lu and Van Roy [15], Phan et al. [A], Qin et al. [20], or Janz et al. [8]) had achieved the $O(d^{3/2}\sqrt{T})$ frequentist regret bound with sublinear $T$ dependence on ensemble size.**
- **Simply plugging in an ensemble size of $O(K \log T)$ into previous results (e.g., instead of $O(d \log T)$ as in Janz et al. [8]) does not trivially achieve the $O(d^{3/2}\sqrt{T})$ frequentist regret bound. In fact, regardless of how the ensemble size is chosen (whether $O(d \log T)$, $O(K \log T)$ or even $O(KT)$) and what algorithmic tweaks are applied (such as symmetrization in Janz et al. [8]), our $O(d^{3/2}\sqrt{T})$ bound remains the sharpest frequentist regret bound.**
- **This is the key contribution of our work: No prior analysis in ensemble sampling had successfully achieved the $O(d^{3/2}\sqrt{T})$ frequentist regret bound (with sublinear ensemble size in $T$). We presented and proved a novel method to reach this bound for the first time.**

Even at face value, our results clearly stand out compared to the existing literature. We sincerely and kindly ask you to reconsider these points when evaluating our submission. Beyond the face value of our primary results, which we believe are already significant, we also introduce a general regret analysis framework (Theorem 2) for linear bandit algorithms. This framework not only generalizes the regret analysis for randomized algorithms like ensemble sampling but also applies to other optimism-based deterministic algorithms. 
This could be of significant interest beyond ensemble sampling, which we would like to bring to your attention. Considering the significant value of these contributions, we strongly believe in the value of our work and its potential impact on future research. Thank you for your support. If you have any questions, please let us know. Sincerely, Authors --- **Reference:** - #### Phan et al. [A]: Phan, M., Abbasi Yadkori, Y., & Domke, J. (2019). Thompson sampling and approximate inference. Advances in Neural Information Processing Systems, 32. --- Rebuttal Comment 2.1: Comment: I appreciate the authors' responses. I remain of my opinion that the paper deserves to be accepted because it makes a significant theoretical contribution, but after reading other reviews, I believe that explicitly adding the following perspectives will help broader readers understand this topic.

**Comparison to Janz et al. [8]**
- While this paper assumes a finite number of arms, Janz et al. [8] can handle an infinite number of arms.
- When $d < K$, the upper bound on the required sampling size obtained in Janz et al. [8] can be smaller than that in this paper.

**Comparison to LinTS**
- While this paper assumes a finite number of arms, LinTS (e.g., Abeille and Lazaric [2]) can handle an infinite number of arms.
- When focusing on $d$ and $T$, the regret bound of LinTS is $O(d^{3/2}\sqrt{T \log d}\log T)$, while the regret bound shown in this paper is $O((d \log T)^{3/2}\sqrt{T})$.
- Since this is a minor difference, I believe that there is no problem in claiming that the regret bound obtained in this paper "matches" that of LinTS.
- Reasons for the above differences: In my understanding, the difference is caused by the decoupling technique (a novel technical tool by the authors) discussed in lines 222-230.

--- Reply to Comment 2.1.1: Title: Thank you Comment: Thank you very much for your continued support! We will definitely add the discussion as suggested in the revision. 
We appreciate you recognizing the contributions of our work.
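For context on the algorithm family discussed throughout these reviews, the following is a minimal self-contained sketch of linear ensemble sampling: the learner maintains $m$ perturbed ridge-regression estimates, samples one ensemble member each round, and plays that member's greedy arm. The perturbation scale, shared Gram matrix, and uniform member selection are generic illustrative choices, not the paper's exact Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_sampling(arms, theta_star, T=500, m=20, lam=1.0, sigma=0.1):
    """Generic linear ensemble sampling sketch (not the paper's exact spec).

    Maintains m perturbed ridge estimates sharing one Gram matrix; each
    round one member is drawn uniformly and its greedy arm is played.
    Returns the cumulative (pseudo-)regret.
    """
    K, d = arms.shape
    V = lam * np.eye(d)              # shared regularized Gram matrix
    b = np.zeros((m, d))             # perturbed response sums, one row per member
    best = (arms @ theta_star).max()
    regret = 0.0
    for _ in range(T):
        theta_hat = np.linalg.solve(V, b.T).T       # all m ridge estimates
        j = rng.integers(m)                         # sample an ensemble member
        a = arms[np.argmax(arms @ theta_hat[j])]    # greedy arm for member j
        reward = a @ theta_star + sigma * rng.normal()
        V += np.outer(a, a)
        # every member records the reward plus its own fresh perturbation
        b += a * (reward + sigma * rng.normal(size=(m, 1)))
        regret += best - a @ theta_star
    return regret
```

With unit-norm arms and parameter, per-round regret is trivially at most 2, so cumulative regret is at most $2T$; the substantive question addressed by the paper's Theorem 1 is the $\tilde{O}(d^{3/2}\sqrt{T})$ behavior when $m = \Omega(K \log T)$.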
Summary: The paper explores ensemble sampling for linear bandit problems and enhances the existing regret bounds by a factor of $d$, aligning the scaling with respect to $d$ to that of Thompson sampling algorithms. The algorithm is somewhat simpler than the existing work [8] by permitting any policy for selecting the estimator, and without requiring the symmetry needed in [8]. Strengths: The paper improves the existing regret bounds for ensemble sampling by a factor of $d$. The analytical details are effectively presented, supplemented by several insightful remarks. Weaknesses: The scaling of the ensemble size with $K$ presents some issues. Utilizing a discretization argument over a continuous domain with a fine grid, where the distance between points is $O(1/\sqrt{T})$ (to preserve the regret bound), results in $K=O(T^{d/2})$ arms. This is polynomial in $T$ and exponential in $d$. Assuming a finite, small number of arms could be crucial for achieving the improved results. Therefore, it might not be fair to directly compare these results with those in [8], which seems applicable to continuous domains with many arms. The abstract is challenging to read and understand because terms like 'K' and 'LinPHE' are used without prior introduction. Technical Quality: 3 Clarity: 3 Questions for Authors: Could the authors please comment on the scaling of the ensemble size with $K$? Is it appropriate to compare your results with those presented in [8]? Is a small $K$ crucial for obtaining the improved results? Additionally, what implications would there be in continuous domains or in general where $K$ is large? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, some limitations are discussed in Appendix K. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your overall positive feedback and particularly your recognition of the theoretical value of our work. We sincerely hope you take our response into account in reassessing our work. --- ### **Scaling of $K$** We respectfully believe there is a misunderstanding in your comment. We are more than happy to clarify any confusion. We emphasize that **our regret bound does NOT depend on the number of arms $K$ at all, even logarithmically**. More crucially, **our regret does not have any dependence on the ensemble size $m$ at all (not even logarithmic)**. We do not assume the number of arms to be small, as long as it is finite (finiteness is only needed for the ensemble size requirement). Even for instances with $K = O(T^{d/2})$ arms as you mention, our improved regret bound remains valid with the same order. With this key point established (the statistical efficiency, i.e., the regret guarantee, is independent of $K$ and $m$, which is our main focus), let us compare our result with Janz et al. [8]. Janz et al. [8] showed that an ensemble size of $\Omega(d \log T)$ is sufficient, but only to derive the unfavorable $\tilde{O}(d^{5/2} \sqrt{T})$ bound. Their regret bound depends super-linearly on the ensemble size $m$ (see Remark 3), which is counter-intuitive. Sure, the ensemble size proposed by Janz et al. [8] does not scale with $K$ (but it still scales with the dimensionality $d$, in contrast with ours). However, what does it get you? Only a very sub-optimal and counter-intuitive result despite a more complicated algorithm with symmetrization. Furthermore, when $d$ is higher the situation gets even worse, both for the ensemble size and especially for the regret bound, with its super-linear dependence $O(d^{5/2})$. On the other hand, we present a new way of treating ensembles with $\Omega(K \log T)$ (using novel analysis techniques) and achieve $\tilde{O}(d^{3/2} \sqrt{T})$, matching the worst-case regret of LinTS for the first time! 
We propose a simple algorithmic approach (with a new way of looking at the ensemble) and significantly novel analysis (compared to [8]) to improve the regret of ensemble sampling. Therefore, it is absolutely appropriate to compare our results with those in [8]. We believe we should welcome any new way of implementing ensemble sampling if it can provide improved regret guarantees. --- Thank you for your feedback on the abstract. We will make the necessary edits in the final version. --- Rebuttal 2: Title: Our contributions Comment: Dear Reviewer TPGY, To put our contributions in perspective, let's compare the face value of our results with those in the relevant literature:

| Paper | Regret Bound | Ensemble Size | Publication |
|---|---|---|---|
| Lu and Van Roy [15] | Invalid | Invalid | *NeurIPS 2017* |
| Phan et al. [A] | Invalid (Lemma 13) | Invalid | *NeurIPS 2019* |
| Qin et al. [20] | $O(\sqrt{dT\log{K}})$ **Bayesian** regret | $O(KT)$ | *NeurIPS 2022* |
| Janz et al. [8] | $O(d^{5/2}\sqrt{T})$ **Frequentist** regret | $O(d\log{T})$ | - |
| **Our work** | $O(d^{3/2}\sqrt{T})$ **Frequentist** regret | $O(K\log{T})$ | Currently under review at *NeurIPS 2024* |

##### (Phan et al. [A] contains an extended result on ensemble sampling in their appendix using the results from Lu and Van Roy [15], but since the latter is incorrect, so is the former). Even when considering just the face value of the results presented in each paper, overlooking our work would discourage further progress in the field of ensemble sampling. ### Key Points:

- **Prior to our work, none of the previous studies in ensemble sampling (Lu and Van Roy [15], Phan et al. [A], Qin et al. [20], or Janz et al. 
[8]) had achieved the $O(d^{3/2}\sqrt{T})$ frequentist regret bound with sublinear $T$ dependence on ensemble size.**
- **Simply plugging in an ensemble size of $O(K \log T)$ into previous results (e.g., instead of $O(d \log T)$ as in Janz et al. [8]) does not trivially achieve the $O(d^{3/2}\sqrt{T})$ frequentist regret bound. In fact, regardless of how the ensemble size is chosen (whether $O(d \log T)$, $O(K \log T)$ or even $O(KT)$) and what algorithmic tweaks are applied (such as symmetrization in Janz et al. [8]), our $O(d^{3/2}\sqrt{T})$ bound remains the sharpest frequentist regret bound.**
- **This is the key contribution of our work: No prior analysis in ensemble sampling had successfully achieved the $O(d^{3/2}\sqrt{T})$ frequentist regret bound (with sublinear ensemble size in $T$). We presented and proved a novel method to reach this bound for the first time.**

Even at face value, our results clearly stand out compared to the existing literature. We sincerely and kindly ask you to reconsider these points when evaluating our submission. Beyond the face value of our primary results, which we believe are already significant, we also introduce a general regret analysis framework (Theorem 2) for linear bandit algorithms. This framework not only generalizes the regret analysis for randomized algorithms like ensemble sampling but also applies to other optimism-based deterministic algorithms. This could be of significant interest beyond ensemble sampling, although it seems to have been overlooked in the reviews. Considering the significant value of these contributions, we strongly believe our work deserves more recognition than just a "Borderline accept." We are eager to clarify any points further. Please feel free to reach out with any questions. Sincerely, Authors --- **Reference:** - #### Phan et al. [A]: Phan, M., Abbasi Yadkori, Y., & Domke, J. (2019). Thompson sampling and approximate inference. Advances in Neural Information Processing Systems, 32.
Summary: This paper proposes a neater version of linear ensemble sampling and streamlines the analysis of OFUL-inspired algorithms for linear bandits. The authors proved that this version of linear ensemble sampling has its high-probability regret upper bound the same order as LinTS, i.e., $\tilde{O}(d^{3/2}\sqrt{T})$ so long as the ensemble size is at least linear in $K$ (the cardinality of the arm set) and logarithmic in $T$. Their analysis introduces a lightweight way for the community to deal with the dependency between the sequence of perturbations and the sequences of selected arms while preserving a sublinear ensemble size. This paper also tries to subsume linear perturbed-history exploration under the streamlined analysis framework. Strengths: 1. The proposed algorithm is the first to achieve the same statistical performance as LinTS up to logarithmic factors. 2. The proposed analysis framework in Theorem 2 identifies key components in OFUL-inspired algorithms, which is of broader interest to the community of online learning. 3. The reformulations (of Algorithm 1) made in the analysis are insightful and easy to follow, especially the reindexed viewpoint of the sequence of selected arms in the proof of Claim 1. Weaknesses: 1. The presentation in Section 6 is very confusing. As far as I can tell, Theorem 1 requires the ensemble size to be at least linear in $K$, and the justification of Claim 1 in the proof of Theorem 1 relies on $K < \infty$ at least superficially. However, the authors claim that Corollary 1, as a corollary of Theorem 1, can imply a regret bound for LinPHE "under the infinite arm setting" in Line 349 and even in the Abstract. It could be highly possible that both Theorem 1 and Corollary 1 are correct, but the authors should clarify the relationship between the two results further. I am willing to reconsider my evaluation if the authors can provide a more detailed and satisfactory explanation. 2. 
Though this paper is mathematically significant, it is not rigorously clear why we should prefer linear ensemble sampling over LinTS if we focus on the linear bandits setting (instead of more complex models like neural networks). For example, the authors mention that it might be intractable to compute the posterior of LinTS and Theorem 1 should be applicable to perturbations following "any symmetric non-degenerate subGaussian distribution"; then mathematically speaking, linear ensemble sampling should be preferred over LinTS **only if** there does exist a symmetric non-degenerate subGaussian distribution (serving as the perturbation in linear ensemble sampling) whose counterpart in LinTS has an intractable posterior. However, the authors do not provide any concrete examples or discussions on this point. I am willing to reconsider my evaluation if the authors can provide a more detailed and satisfactory explanation. 3. Line 244: There seems to be a typo in the index range for $\eta_t$. 4. The presentation between Line 527 and Line 528 is inconsistent with the definition of $N_{k,t}$. (Back to front) 5. Minor typo in Line 240: "lienar" -> "linear". 6. Minor typo between Line 480 and Line 481: $\beta_t$ -> $\beta_{t-1}$. 7. Minor typo in the proof of Lemma 4: $\sqrt{d} + \sqrt{...}$ -> $\sqrt{d} + \sqrt{2} \cdot \sqrt{...}$. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The ensemble size $m$ in Theorem 1 depends on $T$, which seemingly makes the algorithm inherently incompatible with doubling-trick-style techniques and thus less practical, right? 2. See the second point in the Weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your overall positive feedback and particularly recognizing the theoretical value of our work. We strongly and sincerely believe that the first two points you mentioned as weaknesses are not weaknesses. We sincerely hope you take our response into account in reassessing our work. --- ### **Corollary 1 and Theorem 1** Although Theorem 1 shows that an ensemble size of $\mathcal{O}(K \log T)$ is sufficient to guarantee an $\mathcal{O}(\sqrt{T})$ regret bound, it does not state that an ensemble size smaller than $\mathcal{O}(K \log T)$ must fail. This is a sufficient condition, not a necessary one. Please note that Corollary 1 derives a regret bound for LinPHE that is independent of the number of arms. What Corollary 1 highlights is that an ensemble of size $T$ with a round-robin selection rule is also sufficient to attain the same regret bound, following the same logic as Theorem 1. We strongly believe this is a novel finding. The proof of Corollary 1 follows the analysis framework of Theorem 1 but skips the latter part, including the use of Claim 1. This adjustment makes it applicable to the infinite arm setting for LinPHE (or the arm-independent regret bound of LinPHE even for the finite arm setting). The latter part of the proof in Theorem 1, particularly Claim 1, focuses on decoupling the dependency between the sequence of selected arms and perturbations, which are already independent in the case of LinPHE. We refer to the discussion in Corollary 1, especially the last few sentences. Appendix G, where we present a concise proof of Corollary 1 and provide additional discussion, should also help resolve your concerns. We are more than happy to include a more detailed discussion about this point. --- ### **Ensemble Sampling and LinTS** First of all, thank you for acknowledging the mathematical significance of our work. 
Regarding your question on whether one should prefer ensemble sampling over LinTS, let's review what was known prior to our work, which we believe the reviewer is well aware of. - **Lu and Van Roy [15]** attempted to provide a regret analysis of ensemble sampling but admitted to an **erroneous analysis**, invalidating their results. - **Qin et al. [20]** analyzed the Bayesian regret, a weaker notion of regret than the frequentist regret analyzed in our work. Moreover, their result required the **ensemble size to be linear in $T$**, which is highly prohibitive in practice. - **Janz et al. [8]** showed that the ensemble size only scales logarithmically with $T$, an improvement over Qin et al. [20]. However, their regret bound is $\mathcal{O}(d^{5/2}\sqrt{T})$, with an **additional $d$ dependence** compared to linear Thompson Sampling [2, 3], and a **super-linear dependence on the ensemble size $m$**. This result is not only loose but also counter-intuitive (the larger the ensemble size, the worse the performance, raising the question of why to use ensemble sampling). **Prior to our work:** Hence, if you had asked the same question prior to our work, the answer would have been: there is no theoretical evidence that ensemble sampling can perform on the same level as LinTS in the worst case, as the worst-case regret guarantees [8] were worse than those of LinTS. Furthermore, the previously proposed algorithm's algorithmic complexity and super-linear dependence on the ensemble size $m$ in the regret bound made the use of a larger ensemble size impractical. **Our work:** Our work provides the first theoretical evidence that, even in the worst case, ensemble sampling can be a viable alternative to LinTS, matching the same regret bound of $\mathcal{O}(d^{3/2}\sqrt{T})$ (with no dependence on $m$ in the regret bound at all) for the first time. 
Please note that, as mentioned in the paper, ensemble sampling has already been used in practice and exhibits competitive performance in complex settings [16, 17, 18, 19, 23, 24]. However, the theoretical study of its statistical behavior has been lagging behind, even in the linear bandit setting [15, 20, 8]. As the linear bandit problem serves as a foundational framework for many other problems, an improved analysis of linear ensemble sampling is absolutely necessary, particularly because existing studies have shown clearly inferior results compared to LinTS, either in the regret bounds or in the requirements for ensemble size. Regarding the "intractable posterior", our statement refers to more "complex problems" (we did not claim that LinTS is intractable under the linear model assumption). What is important is that TS in more general and complex problem settings is much more difficult to implement, whereas ensemble sampling is easily generalizable to more complex problems. --- ### **[W3-7] Minor Typos** We appreciate your thorough inspection of the manuscript. We will make edits incorporating your feedback. --- ### **Answer to Question: Doubling trick?** One can apply the doubling trick by restarting the algorithm with doubling periods, each time with a larger ensemble. The regret bound we obtained should still be valid when the doubling trick is employed, possibly with some larger constant factors, but the main order should remain the same. Of course, as you are well aware, the doubling trick has its own drawbacks, but these apply to the previous algorithm in Janz et al. [8] as well. This is not a particular issue with our algorithm. 
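The restart scheme described in the doubling-trick answer above can be sketched generically. Here `run_epoch` stands in for running a fresh instance of the algorithm (e.g., with an ensemble sized for the epoch's horizon); the name and interface are illustrative, not from the paper.

```python
def doubling_trick(run_epoch, total_T):
    """Restart a horizon-aware algorithm over doubling horizons.

    run_epoch(T_k) runs a fresh instance for T_k rounds (e.g., instantiated
    with an ensemble of size O(K log T_k)) and returns its regret over that
    epoch. If each epoch's regret is O(sqrt(T_k)), the geometric sum of
    epoch regrets stays O(sqrt(total_T)) up to a constant factor.
    """
    regret, t, k = 0.0, 0, 0
    while t < total_T:
        T_k = min(2 ** k, total_T - t)  # epoch horizons 1, 2, 4, 8, ...
        regret += run_epoch(T_k)
        t += T_k
        k += 1
    return regret
```

For example, if each epoch of length $T_k$ incurs regret $\sqrt{T_k}$, the total over horizon $T$ is at most $\frac{\sqrt{2}}{\sqrt{2}-1}\sqrt{T} \approx 3.41\sqrt{T}$, preserving the $\sqrt{T}$ order as the rebuttal states.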
--- Rebuttal 2: Title: Our contributions Comment: Dear Reviewer QJqu, To put our contributions in perspective, let's compare the face value of our results with those in the relevant literature:

| Paper | Regret Bound | Ensemble Size | Publication |
|---|---|---|---|
| Lu and Van Roy [15] | Invalid | Invalid | *NeurIPS 2017* |
| Phan et al. [A] | Invalid (Lemma 13) | Invalid | *NeurIPS 2019* |
| Qin et al. [20] | $O(\sqrt{dT\log{K}})$ **Bayesian** regret | $O(KT)$ | *NeurIPS 2022* |
| Janz et al. [8] | $O(d^{5/2}\sqrt{T})$ **Frequentist** regret | $O(d\log{T})$ | - |
| **Our work** | $O(d^{3/2}\sqrt{T})$ **Frequentist** regret | $O(K\log{T})$ | Currently under review at *NeurIPS 2024* |

##### (Phan et al. [A] contains an extended result on ensemble sampling in their appendix using the results from Lu and Van Roy [15], but since the latter is incorrect, so is the former). Even when considering just the face value of the results presented in each paper, overlooking our work would discourage further progress in the field of ensemble sampling. ### Key Points:

- **Prior to our work, none of the previous studies in ensemble sampling (Lu and Van Roy [15], Phan et al. [A], Qin et al. [20], or Janz et al. [8]) had achieved the $O(d^{3/2}\sqrt{T})$ frequentist regret bound with sublinear $T$ dependence on ensemble size.**
- **Simply plugging in an ensemble size of $O(K \log T)$ into previous results (e.g., instead of $O(d \log T)$ as in Janz et al. [8]) does not trivially achieve the $O(d^{3/2}\sqrt{T})$ frequentist regret bound. In fact, regardless of how the ensemble size is chosen (whether $O(d \log T)$, $O(K \log T)$ or even $O(KT)$) and what algorithmic tweaks are applied (such as symmetrization in Janz et al. 
[8]), our $O(d^{3/2}\sqrt{T})$ bound remains the sharpest frequentist regret bound.** - **This is the key contribution of our work: No prior analysis in ensemble sampling had successfully achieved the $O(d^{3/2}\sqrt{T})$ frequentist regret bound (with sublinear ensemble size in $T$). We presented and proved a novel method to reach this bound for the first time.** Even at face value, our results clearly stand out compared to the existing literature. We sincerely and kindly ask you to reconsider these points when evaluating our submission. Beyond the face value of our primary results, which we believe are already significant, we also introduce a general regret analysis framework (Theorem 2) for linear bandit algorithms. This framework not only generalizes the regret analysis for randomized algorithms like ensemble sampling but also applies to other optimism-based deterministic algorithms. This could be of significant interest beyond ensemble sampling, although it seems to have been overlooked in the reviews. Considering the significant value of these contributions, we strongly believe our work deserves more recognition than just a "Borderline accept." We are eager to clarify any points further. Please feel free to reach out with any questions. Sincerely, Authors --- **Reference:** - #### Phan et al. [A]: Phan, M., Abbasi Yadkori, Y., & Domke, J. (2019). Thompson sampling and approximate inference. Advances in Neural Information Processing Systems, 32. --- Rebuttal 3: Title: Borderline Accept -> accept Comment: Thank you for your nice rebuttal. The authors have sufficiently emphasized the significance of their work and addressed my mathematical concern on Corollary 1. Given the line of (precedent) theoretical works on ensemble sampling, and the technical contributions of this paper, I have raised my score to 7. 
- By the way, I kindly encourage the authors to delete the last column of the penultimate line of the markdown table in their rebuttal text, since pointing out that a certain cited paper, i.e., xxx et al., has been submitted to XXX 2024 does not quite conform to the double-blind principle. --- Rebuttal Comment 3.1: Title: Thank you Comment: Thank you very much for recognizing the value of our work! We have modified our comments.
Summary: An analysis of Ensemble Sampling in the linear setup with closed-form incremental updates and a finite action set. Strengths: Provides a new analysis of linear Ensemble sampling. Weaknesses: 1. **Lack of Practical Implications for Ensemble Sampling in Complex Settings:** - The paper does not discuss how Ensemble Sampling can be effectively applied in complex, real-world scenarios. Practical considerations and potential challenges are not addressed, which limits the practical utility of the proposed approach. 2. **Over-Claims:** - The authors claim to close the gap between theory and practice by providing an improved regret bound for linear ensemble sampling. However, they do not clearly define this gap. The paper lacks simulations or rigorous arguments to support the existence or closure of this gap. For instance, the paper does not present practical scenarios or empirical results that highlight the gap. 3. **Incorrect Claims:** The authors incorrectly claim that their work matches the state-of-the-art results for randomized linear bandit algorithms. Specifically: - The best-known regret bound for exact Thompson Sampling (TS) in a finite-arm (K actions) setting is $O(d \sqrt{T \log K \log T})$ [1]. - The paper proves a regret bound for linear Ensemble Sampling of $O((d \log T )^{3/2} \sqrt{T})$. - This bound does not match the state-of-the-art results or even the exact TS bound. - Additionally, exact TS is not the state-of-the-art for randomized linear bandit algorithms in terms of regret bound. - **The authors' claim that their result matches state-of-the-art randomized linear bandit algorithms is incorrect** as their bound does not even match that of exact TS. [1] Agrawal, S., & Goyal, N. (2013, May). Thompson sampling for contextual bandits with linear payoffs. In International Conference on Machine Learning (pp. 127-135). PMLR. 4.
**Mischaracterization of Perturbed History Exploration (PHE):** - In linear Ensemble sampling, $J_t$ is sampled from a uniform distribution. - Perturbed history exploration can be viewed as deterministically choosing $T$ different ensemble models, each with an entirely independent set of perturbation noise, one by one. This observation is **not** new. - Perturbed history exploration is not a special case of the linear ensemble sampling algorithm, as one is a deterministic algorithm and the other is a randomized algorithm. - Therefore, it is **not** valid to use the result from LinPHE to claim that "linear Ensemble sampling use $\min (K \log T, T )$ ensembles, allowing the number of arms to be infinite". Technical Quality: 1 Clarity: 3 Questions for Authors: 1. See weakness. 2. Can you provide a more in-depth justification of the assumption you made in line 270 for the proof of Theorem 1? Intuitively, it seems right, since these R.V.s are sampled in the beginning and their randomness comes from the algorithm side. But it needs rigorous justification. - The random variables $\{W^j\}$ are sampled independently in the beginning. - $J_t$ and $Z_{t}^j$ are independent of the history before step $t$. - However, (1) $(X_s, Y_s)$ depend on $J_t$ for $s \ge t$. - and (2) $(X_s, Y_s)$ depend on $Z_t^j$ for $s > t$. 3. Can you provide empirical evidence to support the scaling of the ensemble size? Is it really scaling linearly with the size of the action set $K$? What is the scaling factor with the ambient dimension $d$? **What theory-practice gap are you closing?** Confidence: 5 Soundness: 1 Presentation: 3 Contribution: 1 Limitations: Any implications for scalable posterior sampling in complex settings? Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the time and effort you have invested in evaluating our work. However, we believe there are several fundamental misunderstandings in your review that we would like to address. **The theory-practice gap:** Variants of ensemble sampling have already been shown to perform effectively in practice, such as in online recommendation [16,24,23] and deep reinforcement learning [17,18,19]. However, the theoretical understanding of ensemble sampling has been lagging behind, even for the linear bandit problem, as recognized by previous theoretical works. - Lu and Van Roy [15] attempted to provide a regret analysis of ensemble sampling but admitted to an **erroneous analysis** that invalidates their results (see [15](https://arxiv.org/pdf/1705.07347)). - Qin et al. [20] analyzed the Bayesian regret, a weaker notion of regret than the frequentist regret analyzed in this work. More importantly, their result requires that the **ensemble size be linear in $T$**, which is highly prohibitive in practice. - Janz et al. [8] showed that the ensemble size only scales logarithmically with $T$, an improvement over [20] on the requirement for the ensemble size. However, their regret bound is an unsatisfactory $\mathcal{O}(d^{5/2}\sqrt{T})$, with an **additional $d$ dependence** compared to linear Thompson Sampling [2,3], and a **super-linear dependence on the ensemble size $m$ in the regret bound**. That is, their result is not only loose but also very counterintuitive (the larger the ensemble size, the worse the performance, which questions the motivation of ensemble sampling). To summarize, there was no frequentist (worst-case) regret bound matching that of comparable algorithms such as LinTS (or LinPHE) with the number of ensembles not scaling linearly with $T$. **Despite previous attempts, there has been a significant gap**.
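For readers less familiar with the algorithm under discussion, linear ensemble sampling can be sketched generically as follows. This is an illustration only, with hypothetical parameter names; it is not the paper's exact Algorithm 1 (which, among other details, also perturbs the regularization term):

```python
import numpy as np

def linear_ensemble_sampling(arms, theta_star, T, m, lam=1.0, sigma=1.0, seed=0):
    """Generic linear ensemble sampling sketch (illustration, not the paper's
    exact Algorithm 1).

    Keeps m regularized least-squares models fit to independently perturbed
    rewards; each round an index J_t is drawn uniformly and the chosen
    model's greedy arm is played.
    """
    rng = np.random.default_rng(seed)
    d = arms.shape[1]
    V = lam * np.eye(d)                      # shared Gram matrix
    B = np.zeros((m, d))                     # per-model perturbed response vectors
    opt = (arms @ theta_star).max()
    regret = 0.0
    for _ in range(T):
        thetas = np.linalg.solve(V, B.T).T   # m ridge estimates, shape (m, d)
        j = rng.integers(m)                  # J_t ~ Uniform{0, ..., m-1}
        x = arms[np.argmax(arms @ thetas[j])]
        y = x @ theta_star + sigma * rng.standard_normal()
        V += np.outer(x, x)
        # every model records the reward plus its own fresh Gaussian perturbation
        B += x[None, :] * (y + sigma * rng.standard_normal(m))[:, None]
        regret += opt - x @ theta_star
    return regret
```

The point of contention in this thread is exactly how large `m` must be (and how the regret scales) for such a scheme to match LinTS-style guarantees.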
Our work aims to close this gap by demonstrating the **first non-vacuous theoretical guarantee** of linear ensemble sampling whose **regret bound matches that of LinTS for the first time** and ensemble size is still **logarithmic in $T$**, **without any performance degradation as the ensemble size grows**. We provide this tighter regret bound through new analysis and with an even simpler algorithm than that of Janz et al. [8]. --- **Correct Claim:** It is crucial to acknowledge that both $\tilde{\mathcal{O}}(d^{3/2}\sqrt{T})$ and $\tilde{\mathcal{O}}(d\sqrt{T \log K})$ bounds are well-established in the finite-arm setting [3], and neither is inherently superior. - **FACT**: $\tilde{\mathcal{O}}(d^{3/2}\sqrt{T})$ is a valid best-known upper bound of LinTS [2,3], even for the finite-arm setting. For instance, when $d=2$ and $K=8$, $\tilde{\mathcal{O}}(d^{3/2}\sqrt{T})$ is clearly smaller than $\tilde{\mathcal{O}}(d \sqrt{T \log K})$ for any $T$. Experts in the bandit literature agree that $\mathcal{O}(d^{3/2}\sqrt{T})$ (for the $K$-independent bound—here "$K$-independent" indicates that the regret is independent of $K$, not implying an infinite arm setting) is the best bound for TS-type (not UCB-type or hybrid with UCB) algorithms in linear bandits. Thus, **we respectfully disagree with the assertion that our regret bound is worse than Thompson Sampling**. We sincerely hope that the reviewer and we can agree on this well-accepted fact within the community. **Our regret bound for linear ensemble sampling matches that of LinTS for the first time**, marking a significant contribution to the field. --- **Relationship between ensemble sampling and LinPHE:** Although previous studies of ensemble sampling have fixed the ensemble sampling distribution to be merely uniform, it turns out that **uniform distribution is not necessary**. - **FACT**: The sampling distribution in ensemble sampling does not have to be uniform. 
For instance, even if some models' probabilities are increased by a constant factor (and others' decreased), both our analyses and previous work remain valid, achieving the same order of regret bounds. **To our best knowledge, the relationship between perturbed history exploration (PHE) and ensemble sampling has not been rigorously discussed on either side**. If you could provide any references revealing the relationship between ensemble sampling and LinPHE as argued in your review, we would appreciate it. As mentioned earlier, uniform sampling is not a necessary characteristic of ensemble sampling. Therefore, we take a broader perspective of linear ensemble sampling (Algorithm 1) to include any sampling policy, rather than restricting it to a uniform distribution. This generalization reveals a significant connection between LinPHE and ensemble sampling, allowing us to use ensemble sampling analysis techniques to derive a new regret bound for LinPHE. Most importantly, our Corollary 1 does **not** "use the result from LinPHE to claim" anything. On the contrary, we apply our analysis framework for linear ensemble sampling **to** analyze LinPHE, deriving a new regret bound. With the generalized algorithm, Proposition 1 and Corollary 1 together imply that an ensemble size greater than $T$ is superfluous, as a round-robin selection of the ensemble models guarantees the same regret bound as Theorem 1. This result opens up new possibilities for ensemble sampling to adopt alternate sampling policies, demonstrating that policies other than uniform sampling can also achieve strong guarantees. This expansion should not be viewed as a weakness of our work; rather, it should be recognized as a significant contribution. 
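To make the "any sampling policy" view above concrete, here is a toy illustration (hypothetical helper names; not the paper's formal construction) of the only place where uniform ensemble sampling and a round-robin, PHE-style scheme differ:

```python
import random

def uniform_index(t, m, rng):
    """Classical ensemble sampling: J_t drawn uniformly at random."""
    return rng.randrange(m)

def round_robin_index(t, m, rng):
    """Deterministic round-robin policy: with m >= T, each model (and its
    independent perturbations) is used exactly once, yielding a PHE-style
    scheme within the same generalized algorithm."""
    return t % m
```

Everything else in the generalized algorithm (the perturbed models and their updates) is shared; only the index policy changes, which is what allows a single regret analysis to cover both.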
Even if the reviewer were to restrictively define ensemble sampling to be associated with uniform distribution only (contrary to the evidence above), the fact that we can apply our novel regret analysis framework to derive a new and unified analysis of both LinPHE and ensemble sampling (two "different randomized algorithm classes," as one wishes to believe) would be even more interesting. Regardless, this new result will be of independent interest to the community. --- Rebuttal 2: Comment: ### Answer to Question: Line 270 First, we would like to clarify that line 270 is only redefining the $\sigma$-algebra $\mathcal{F}_0^A$ to simplify the analysis—this does not alter the algorithm or the problem setting. Sampling all the perturbation values in advance is just an intuitive explanation of this measure, which need not actually take place in the execution of the algorithm. Furthermore, we have already rigorously justified the equivalence of the algorithm under a more complex modification in Appendix F, to which we referred in case the reader expects a full, rigorous justification. To briefly summarize, the algorithm remains equivalent since the perturbation values follow independent Gaussian distributions conditioned on an appropriately defined history, whether they are sampled at the beginning or later. We believe that the "sampling i.i.d. R.V.s in the beginning" viewpoint is well-known; for instance, see the description of the *stack of rewards* model on page 65 of [13]. Note that we do not include the randomness of $j_t$ in $\mathcal{F}_0^A$, so we find your point (1) irrelevant. Your point (2) also does not pose any problems since the dependency on future interactions stays the same regardless of when $Z_t^j$ is sampled. --- Rebuttal 3: Title: Concerns on the misleading arguments Comment: Dear Authors, Thank you for your detailed rebuttal.
While I appreciate your efforts to clarify your contributions, I still have concerns about some of the claims and comparisons made in your response. 1. **Comparison with Previous Ensemble Sampling (ES) Analysis:** Your justification for the dependency on $K$ and $d$ is not sufficiently rigorous. A proper comparison of complexity bounds must consider all dominant factors across different regimes. It is only valid to claim superiority in specific regimes of these factors. For instance, consider the following scenarios: a) For $d = 2$ and $K = 8$ (omitting constants and logarithmic terms): - Your analysis: ES with ensemble size 8 achieves a regret bound of $2^{3/2}\sqrt{T}$. - ES [Janz et al., 2023]: ES with ensemble size 2 achieves a regret bound of $2^{5/2}\sqrt{T}$. b) For $d = 2$ and $K = 10000$: - Your analysis: ES with ensemble size 10000 achieves a regret bound of $2^{3/2}\sqrt{T}$. - ES [Janz et al., 2023]: ES with ensemble size 2 achieves a regret bound of $2^{5/2}\sqrt{T}$. These comparisons highlight that **it's misleading to claim your analysis leads to improved results compared to ES [Janz et al., 2023].** The improvement appears to be regime-dependent, and this nuance should be clearly stated. 2. **Ensemble Size Scaling (regarding the usefulness of the analysis):** The implication that the ensemble size needs to **scale linearly with $K$ is counter-intuitive** and potentially makes the ES algorithm impractical for large action spaces. This requirement significantly limits the usefulness of ES in real-world scenarios with large $K$. **Empirical evidence of how the ensemble size scales with $K$ in practice would be valuable to support your theoretical claims and demonstrate real-world applicability.** From the authors' rebuttal PDF, it looks like the empirical scaling with $K$ is of a substantially lower order than $O(K)$. Given this empirical evidence, we might not say the analysis in this work -- leading to $\tilde{O}(K)$ scaling -- justifies the usefulness of ES.
**And more importantly, it is misleading to claim that theory-practice gap is closed.** 3. **Comparison with Exact TS:** To claim that your regret order matches exact TS, your analysis should match or improve upon the existing analysis of exact TS across all regimes. A more comprehensive result would be of the form with the same ensemble size configuration: $$\min(\tilde{\mathcal{O}}(d^{3/2}\sqrt{T}), \tilde{\mathcal{O}}(d\sqrt{T \log K}))$$ This would demonstrate matching performance across different regimes of $d$ and $K$. **Without such a result, the claim of matching exact TS performance is overstated and misleading.** 4. **Practical Implications and Theory-Practice Gap:** While you mention practical applications of ES variants in online recommendation and deep reinforcement learning, your current analysis doesn't directly address how these theoretical improvements translate to practical benefits in these complex settings. To substantiate your claim of closing the theory-practice gap, it would be beneficial to provide: a) Empirical results demonstrating the practical advantages of your approach over existing analysis. Does your analysis provides more accurate characterization of practical performance? b) A clear explanation of how your theoretical improvements address specific challenges in real-world applications. In conclusion, while your work provides interesting insights into ES, the claims of matching exact TS performance and improving upon previous ES analyses need more nuanced and regime-specific statements. I encourage you to: 1. Refine your comparisons to accurately reflect the specific conditions under which your results hold and improve upon existing work, for both computation and regret considerations. 2. Provide matching results between empirical evidence and theoretical claims, especially regarding the scaling of ensemble size with $K$ in the future revision if you want to claim closing theory-practice gap. 3. 
Clarify how your work concretely **addresses the theory-practice gap** in ensemble sampling applications. These additions would significantly strengthen your paper and provide a more comprehensive understanding of your contributions to the field. Sincerely, Reviewer QBvS --- Rebuttal 4: Title: Fundamental Disagreements with and Concerns about Reviewer QBvS's comments (1/3) Comment: Dear Reviewer QBvS, We appreciate your very quick (in about 2 hours) and enthusiastically prepared responses to our rebuttal. We are grateful for the opportunity to discuss our work with you. We sincerely hope we can communicate with rationality and reason, without unnecessarily hurting others. We respectfully urge you to reconsider your evaluation of our work. **Before accusing us of misleading or mischaracterizing our research, we sincerely ask that you reflect on whether you might be unintentionally (or intentionally, which would be a serious ethical issue) misinterpreting or misrepresenting our hard and proud work**. With all due respect, we fundamentally disagree with your comments on almost every point. However, we understand that there can be disagreements, and we sincerely hope you also understand this. Our goal is to find some common ground through constructive dialogue. First, let us clarify some fundamental points. We have a few simple yet crucial questions: - ### **Q1: Prior to our work, do you agree that none of the previous works in ensemble sampling (Lu and Van Roy [15], Qin et al. [20], or Janz et al. [8]) had achieved $O(d^{3/2}\sqrt{T})$ frequentist regret with sublinear $T$ dependence on ensemble size?** - The answer to **Q1** should be **Yes**. When we began this research, our goal was to prove that ensemble sampling could achieve $O(d^{3/2}\sqrt{T})$ with sublinear dependence on $T$. Achieving this rate has long been considered an open problem. 
We achieved this goal with a new way of treating ensembles and a novel analysis technique, resulting in the tightest known frequentist regret bound for ensemble sampling. - ### **Q2: Do you agree that by simply plugging in an ensemble size of $O(K \log T)$ (instead of $O(d \log T)$ as in Janz et al. [8]), the previous analysis does not trivially achieve $O(d^{3/2}\sqrt{T})$ regret?** - The answer to **Q2** should be **Yes**. Since the previous analysis does not allow for a tighter regret, we devised novel techniques to derive a new and improved bound. - ### **Q3: Setting aside how ensemble size is chosen (whether $O(d \log T)$ or $O(K \log T)$) and what algorithmic tweaks are applied, do you agree that $O(d^{3/2}\sqrt{T})$ regret is an improved and currently the sharpest regret in terms of $d$ and $T$ (with no dependence on $K$)?** - The answer to **Q3** should be **Yes**. ### All three questions (Q1-Q3) above are not rhetorical. They are simple binary questions. If your answers to all three questions are "Yes," we can further discuss our paper. Otherwise, we would be very concerned that we may not be on the same page at a fundamental level. This is the **key point of our contribution: prior to our work, no previous works in ensemble sampling had provided a successful analysis to achieve $O(d^{3/2}\sqrt{T})$ regret. We presented and proved a new way of doing it**. We feel that there is mischaracterization in your comments in "1. Comparison with Previous Ensemble Sampling (ES) Analysis." We have been very clear about the bound on the ensemble size. Using your logic, one could argue the opposite by comparing different values of $d$ and $K$. However, such comparison is NOT our main point. Our main goal is simple yet fundamental: **whether one can get $O(d^{3/2}\sqrt{T})$ regret with sublinear (even logarithmic) dependence on $T$ in ensemble size**. 
This had been an open question, and **we aimed to close this well-defined gap!** You are imposing your own definition or agenda of a "gap" -- which is not well-defined nor do we have any reason to commit to -- onto our goal. If your intention is not adversarial, please refrain from doing so. ### **FACT:** Our analysis of ensemble sampling with $O(K \log T)$ ensemble size achieves $O(d^{3/2}\sqrt{T})$ frequentist regret for the first time. Previous approaches, with any choice of ensemble size, did not achieve this. That is a fact. Our resulting $O(d^{3/2}\sqrt{T})$ regret is smaller than $O(d^{5/2}\sqrt{T})$. If that is not an improved regret, then what else is it? Can Janz et al. [8] achieve $O(d^{3/2}\sqrt{T})$ regret with any modification to their analysis and algorithm? No! We do not understand your motivation of earnestly defending the result of Janz et al. [8] and adversarially discrediting our contribution by distorting the facts. --- ## **On Ensemble Size** We respectfully disagree with your argument here. Even your comment that "The implication that the ensemble size needs to scale linearly is counter-intuitive" is simply false. There is NO implication that the ensemble size “needs to scale linearly” because the ensemble size condition is a **sufficient condition**, not a **necessary condition**. We are sure you as an expert understand the difference. What you are asking for is for us to show that what is only proven to be sufficient is somehow empirically manifested as necessary. Your assertion is problematic and what you request of us as empirical evidence is unreasonable. --- Rebuttal Comment 4.1: Title: The Sharpest Regret Bound of Ensemble Sampling is by ES[Qin et al., 2023] if setting aside how ensemble size is chosen Comment: ES [Qin et al., 2023] achieves regret bound $\sqrt{d T \log K}$ for finite action setups with $K$-arms, which matches the regret bound of Thompson Sampling in finite action set setups [Russo and Van Roy, 2016]. 
Quote from the remark below Theorem 1 in ES[Qin et al., 2023]. ``` Comparison with the regret bound for TS The regret bound in Theorem 1 consists of two terms. The first term $(a)$ is exactly the regret bound achieved by TS with exact posterior [Russo and Van Roy, 2016]. Since the action set size is $K$, the entropy of the optimal action $\mathbb{H}\left(A_*\right) \leq \log K$, and as discussed in Russo and Van Roy [2016], when the prior is informative, $\mathbb{H}\left(A_*\right) \ll \log K$. Therefore, the first term $(a)$ is order optimal and improves upon the worst-case regret bound achieved by other algorithms, e.g., upper confidence bound type algorithms. On the other hand, the second term (b) is an incremental term and accounts for posterior mismatch. Note that as the ensemble size $M$ approaches infinity, the second term $(b)$ converges to zero and our regret bound for ES reduces to that for TS with exact posterior. Moreover, as long as the ensemble size $M$ reaches $K T / d$, our regret bound for ES has the same order as that for TS in terms of $T$ and $d$ (up to logarithmic factors). ``` However, it requires $M$ to reach $K T / d$, which is not practically satisfiable and does not justify the need for Ensemble Sampling. --- Rebuttal 5: Title: Concerns about Reviewer QBvS's comments (Continued 2/3) Comment: ## **On Matching Regret of LinTS** Another very concerning assertion is forced by Reviewer QBvS, which we truly believe requires the wisdom of the community. Reviewer QBvS argues that in order to claim that our result matches the regret bound of LinTS [2, 3], we need to match both terms of $\min(\tilde{O}(d^{3/2}\sqrt{T}), \tilde{O}(d\sqrt{T \log K}))$ "across different regimes of $d$ and $K$" and states that without it, the claim is "overstated and misleading." Who sets such a rule? (By the way, what is "exact TS" that the reviewer keeps referring to anyway? We never used it in our manuscript.
We only want to match the regret of LinTS, particularly $\tilde{O}(d^{3/2}\sqrt{T})$, which was the goal to start with.) **Such an assertion is contrary to what is practiced in the literature. As many of us know, it is conventionally accepted that if you match one of the terms (not both), it is said that you have the same order and matching bound as LinTS.** Let us show some examples in the literature: - "this bound is of order $\tilde{O}(d^{3/2}\sqrt{T})$ and it **entirely matches** the result of Agrawal and Goyal [2012b]." Abeille, M., & Lazaric, A. (2017). Linear Thompson sampling revisited. In International Conference on Artificial Intelligence and Statistics (pp. 176-184). PMLR. => **Not** $\min(\tilde{O}(d^{3/2}\sqrt{T}), \tilde{O}(d\sqrt{T \log K}))$, but **only** $\tilde{O}(d^{3/2}\sqrt{T})$ - "[regret] of GP-TS is $\tilde{O}(d^{3/2}\sqrt{T})$, which **recovers** the regret bounds of their linear, parametric analogues … Linear Thompson sampling (Agrawal and Goyal, 2013)" Chowdhury, S. R., & Gopalan, A. (2017). On kernelized multi-armed bandits. In International Conference on Machine Learning (pp. 844-853). PMLR. => **Not** $\min(\tilde{O}(d^{3/2}\sqrt{T}), \tilde{O}(d\sqrt{T \log K}))$, but **only** $\tilde{O}(d^{3/2}\sqrt{T})$ - "Theorem 2 establishes $\tilde{O}(d^{3/2}\sqrt{T})$ worst-case regret, which **matches** the regret bounds of TS methods for linear contextual bandits [Agrawal and Goyal 2013, Abeille et al. 2017] up to logarithmic factor." Oh, M. H., & Iyengar, G. (2019). Thompson sampling for multinomial logit contextual bandits. Advances in Neural Information Processing Systems, 32. => **Not** $\min(\tilde{O}(d^{3/2}\sqrt{T}), \tilde{O}(d\sqrt{T \log K}))$, but **only** $\tilde{O}(d^{3/2}\sqrt{T})$ - "We note that the regret of Term I ($\tilde{O}(d^{3/2}\sqrt{T})$) has the **same bound** as that of Abeille et al. (2017)" Moradipari, A., Thrampoulidis, C., & Alizadeh, M. (2020). Stage-wise conservative linear bandits.
Advances in Neural Information Processing Systems, 33, 11191-11201. => **Not** $\min(\tilde{O}(d^{3/2}\sqrt{T}), \tilde{O}(d\sqrt{T \log K}))$, but **only** $\tilde{O}(d^{3/2}\sqrt{T})$ - "Agrawal and Goyal [2013b] also extended the analysis of multi-armed Thompson Sampling to the linear contextual setting and proved a regret bound of $\tilde{O}(d^{3/2}\sqrt{T})$ where d is the dimension of the context vectors. In this paper, we develop the first variants of Batch Thompson Sampling that **achieve the aforementioned regret bounds**." Karbasi, A., Mirrokni, V., & Shadravan, M. (2021). Parallelizing Thompson sampling. Advances in Neural Information Processing Systems, 34, 10535-10548. => **Not** $\min(\tilde{O}(d^{3/2}\sqrt{T}), \tilde{O}(d\sqrt{T \log K}))$, but **only** $\tilde{O}(d^{3/2}\sqrt{T})$ - "Theorem 3.5 implies the regret of NeuralTS is on the order of $\tilde{O}(\tilde{d}\sqrt{T})$ (here there is an additional $\log K$ factor hidden in $\tilde{O}$, hence it is $\tilde{O}(d\sqrt{T \log K})$). This result **matches** the state-of-the-art regret bound in Chowdhury & Gopalan (2017); Agrawal & Goyal (2013); Zhou et al. (2019); Kveton et al. (2020)." Zhang, W., Zhou, D., Li, L., & Gu, Q. (2021). Neural Thompson sampling. ICLR 2021. => **Not** $\min(\tilde{O}(d^{3/2}\sqrt{T}), \tilde{O}(d\sqrt{T \log K}))$, but **only** $\tilde{O}(d\sqrt{T \log K})$ - "The regret upper bound is also stipulated to be of the order $\tilde{O}(d^{3/2}(1+\sqrt{\sum_{t=1}^T \sigma_t^2}))$, considering $R$ as a constant. Notably, in scenarios where the variance is constant, our LinNATS algorithm recovers the regret guarantee of $\tilde{O}(d^{3/2}\sqrt{T})$ for linear TS (Agrawal and Goyal, 2013; Abeille and Lazaric, 2017)" Xu, R., Min, Y., & Wang, T. (2023). Noise-adaptive Thompson sampling for linear contextual bandits. Advances in Neural Information Processing Systems, 36.
=> **Not** $\min(\tilde{O}(d^{3/2}\sqrt{T}), \tilde{O}(d\sqrt{T \log K}))$, but **only** $\tilde{O}(d^{3/2}\sqrt{T})$ --- Rebuttal 6: Title: Concerns about Reviewer QBvS's comments (Continued 3/3) Comment: - "we present the following corollary in linear bandits, whose main regret $\tilde{O}(d^{3/2}\sqrt{T})$ is **optimal for PS (posterior sampling) algorithms**." Kuang, N. L., Yin, M., Wang, M., Wang, Y. X., & Ma, Y. (2023). Posterior sampling with delayed feedback for reinforcement learning with linear function approximation. Advances in Neural Information Processing Systems, 36, 6782-6824. => **Not** $\min(\tilde{O}(d^{3/2}\sqrt{T}), \tilde{O}(d\sqrt{T \log K}))$, but **only** $\tilde{O}(d^{3/2}\sqrt{T})$ - "We propose data-dependent perturbations … that allow EVILL to **match** the performance of Thompson-sampling-style parameter-perturbation methods." "the regret of EVILL enjoys guarantees that **parallel** those available for Thompson sampling." In the paper, their regret bound is $\tilde{O}(d^{3/2}\sqrt{T})$. Janz, D., Liu, S., Ayoub, A., & Szepesvári, C. (2024). Exploration via linearly perturbed loss minimisation. In International Conference on Artificial Intelligence and Statistics (pp. 721-729). PMLR. => **Not** $\min(\tilde{O}(d^{3/2}\sqrt{T}), \tilde{O}(d\sqrt{T \log K}))$, but **only** $\tilde{O}(d^{3/2}\sqrt{T})$ The list goes on. Note that all of the papers above are NeurIPS, AISTATS, ICLR publications or equivalent. As you can see, to claim that a regret bound matches that of LinTS, matching only one of the terms, most often $\tilde{O}(d^{3/2}\sqrt{T})$, is considered sufficient. Against all these conventions, **Reviewer QBvS's refusal to admit such a convention is not only harsh but also appears to be a deliberate devaluation of our work. Many bandit researchers would agree with this standard, which makes this stance particularly concerning**.
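For the record, the crossover between the two bounds debated throughout this thread reduces (ignoring constants and logarithmic factors) to comparing $\sqrt{d}$ with $\sqrt{\log K}$. A quick numerical sanity check, assuming natural log:

```python
import math

def k_independent_bound(d, T):
    """~O(d^{3/2} sqrt(T)), constants and log factors dropped."""
    return d ** 1.5 * math.sqrt(T)

def k_dependent_bound(d, T, K):
    """~O(d sqrt(T log K)), constants and log factors dropped."""
    return d * math.sqrt(T * math.log(K))

# Their ratio is sqrt(d / log K): the K-independent bound is smaller
# exactly when d < log K, so neither bound dominates in all regimes.
assert k_independent_bound(2, 10_000) < k_dependent_bound(2, 10_000, 8)   # d = 2 < log 8
assert k_independent_bound(4, 10_000) > k_dependent_bound(4, 10_000, 8)   # d = 4 > log 8
```

This is consistent with both sides of the exchange: the $d = 2$, $K = 8$ example cited earlier, and the observation that the comparison flips in other regimes.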
In fact, the $\tilde{O}(d^{3/2}\sqrt{T})$ regret bound serves as a gold standard for TS-based and randomized exploration algorithms; even Janz et al. [8] say the following: "the regret scales with $d^{5/2}\sqrt{T}$ up to logarithmic-in-T factors. The latter scaling is slightly worse than that obtained for Thompson sampling (cf. Theorem 17), where the regret scales with $d^{3/2}\sqrt{T}$, again, up to logarithmic-in-T factors." Janz, D., Litvak, A. E., & Szepesvári, C. (2023). Ensemble sampling for linear bandits: small ensembles suffice. arXiv preprint arXiv:2311.08376. If $\tilde{O}(d^{3/2}\sqrt{T})$ is not a gold standard, why not compare with even tighter bounds, such as the minimax-optimal regret for linear bandits, $\tilde{O}(d\sqrt{T})$, as in Janz et al. [8]? Reviewer QBvS's logic and request are more than just demanding; they become unnecessarily adversarial. Based on convention, we strongly believe that our result deserves credit. It is unreasonable to spend energy arguing about this widely accepted consensus. If the reviewer does not admit it, then they can take it at face value: our result achieves $\tilde{O}(d^{3/2}\sqrt{T})$ for the first time for ensemble sampling. That is a fact. We sincerely hope that the reviewer and we can come to some common ground (we really do), although such hope is diminishing. --- Rebuttal 7: Title: Analysis of Ensemble Sampling: Balancing Computation and Regret Comment: ## **Motivation for Studying Ensemble Sampling** In the context of this discussion in linear function approximation, "exact Thompson Sampling" (TS) refers specifically to Linear Thompson Sampling (LinTS). Ensemble Sampling (ES) has emerged as a method to approximate TS, aiming to scale up exploration in complex environments where no conjugacy can be utilized by exact TS. The ultimate goal of scalable exploration is twofold: achieving bounded per-step computation complexity and sublinear regret in complex environments.
While there is already extensive research on linear contextual bandits, including comprehensive analyses of TS in both Bayesian and Frequentist settings, I acknowledge that it is valuable to study the theoretical aspects of ES with linear function approximation. It's worth noting that linear Thompson sampling already offers bounded per-step computation and near-optimal regret. ## **Current Status of ES Theory and Paths for Advancement** Given the motivation behind ES and existing analyses, to further advance the community's understanding and potentially design better algorithms, we should consider: 1. **Regret Analysis**: Continue to refine either Bayesian or Frequentist regret bounds for ES. 2. **Ensemble Size Optimization**: Investigate how the choice of ensemble size affects both computational complexity and regret bounds. This aspect is crucial for practical implementations and should not be overlooked in theoretical analyses. 3. **Comparative Studies**: Conduct rigorous comparisons between ES and other methods (e.g., LinTS) across various regimes of problem parameters (such as dimension $d$ and number of arms $K$). 4. **Computational Complexity**: Analyze the per-step computation time of ES in relation to problem parameters and ensemble size. 5. **Empirical Validation**: Provide empirical evidence to support theoretical claims, especially regarding the scaling of ensemble size with problem parameters. 6. **Practical Implications**: Clearly articulate how theoretical improvements in ES translate to benefits in real-world applications, particularly in complex environments where exact TS is computationally infeasible. By addressing these aspects comprehensively, future research can provide a more holistic understanding of ES, potentially leading to algorithmic improvements that balance theoretical guarantees with practical applicability. 
**These relevant points emphasize again the previous concern on Ensemble Size Scaling** --- Rebuttal 8: Title: Problem-dependent Nature of Thompson Sampling Comment: # Problem-dependent Nature of Thompson Sampling Thompson Sampling (TS) has gained popularity in the bandit community and beyond due to its simplicity and inherent ability to adapt to various problem setups. This adaptability is evident in both Bayesian and Frequentist analyses of TS. ## Bayesian Perspective ### Information Ratio and Adaptive Bounds [Russo and Van Roy, 2016] introduced the concept of the information ratio, which demonstrates that **without modifying the TS policy specifically for each problem setup**, the information ratio automatically adapts to different problem setups. This led to a class of problem-dependent bounds for TS in their work and inspired many follow-up studies, including tight bounds in the finite $K$-arm setting, the infinite-arm setting, and even the changing action set setting. A notable extension of this approach is found in [Neu et al., 2022], which further develops the information-theoretic analysis of TS for contextual bandits in many problem setups. **Key Reference:** - Russo and Van Roy, 2016. Information Theoretic Analysis of Thompson Sampling - Neu, G., et al. (2022). Lifting the Information Ratio: An Information-Theoretic Analysis of Thompson Sampling for Contextual Bandits. - Hamidi, N., & Bayati, M. (2023). The elliptical potential lemma for general distributions with an application to linear Thompson sampling. Operations Research, 71(4), 1434-1439. ## Frequentist Perspective ### Linear Thompson Sampling (LinTS) Analysis [Agrawal & Goyal, 2013] established pioneering work on the analysis of frequentist regret for LinTS. Their comprehensive study covers both finite arm settings and infinite arm settings (compact action set). The elegance of their analysis lies in its problem-dependent nature, featuring a term $\min(\sqrt{d}, \sqrt{\log K})$ throughout the proof.
This leads to their Theorem 1, which provides adaptive bounds **without modifying the LinTS algorithm specifically for each problem setup**. **Key Reference:** - Agrawal, S., & Goyal, N. (2013, May). Thompson sampling for contextual bandits with linear payoffs. In International Conference on Machine Learning (pp. 127-135). PMLR. ## Implications The problem-dependent nature of TS, as demonstrated in both Bayesian and Frequentist analyses, underscores its versatility and efficiency across various bandit problems. This adaptability contributes significantly to TS's popularity and effectiveness in real-world applications. **These facts emphasize my previous concern on the claim on matching regret performance of TS.** --- Rebuttal 9: Title: Seriously Misleading. Qin et al., 2023 studies Bayesian regret while our results are all in frequentist regret. Comment: Dear Reviewer QBvS, There is another serious issue in your argument (https://openreview.net/forum?id=6SSzMq3WTn&noteId=TYYoKwGxDU). You are quoting **Qin et al., 2023's Bayesian regret bound to argue that it is sharper than our frequentist regret bound**. **This is highly concerning**, as Bayesian regret and frequentist regret are two distinct measures and are not directly comparable. **This is just simply WRONG**. We believe you understand that Bayesian regret is generally considered a weaker notion than frequentist regret. Our manuscript and rebuttal clearly state that the results in Qin et al., 2023 are in the Bayesian regret setting. Note that throughout all of our paper and discussions (even the quotes on LinTS from the literature), the notion of regret we study is the frequentist regret setting. **Sadly, this discrepancy suggests either a serious misunderstanding or a deliberate misrepresentation of our work, which is very troubling**. Reviewer QBvS, we believe that you have experienced being an author of submitted papers.
Many in our community have faced similar challenges, where reviews do not accurately reflect the value of their work. This is particularly problematic for theoretical papers, where significant effort and rigorous results can be easily overlooked due to ignorance or mistakes. Lastly, we respectfully and sincerely urge you to reconsider your evaluation of our work and ensure that it is judged fairly based on its theoretical contributions and merits.
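As a purely arithmetical aside on how the two bounds debated above relate (a standard comparison added for context, not a claim from either party's text):

```latex
% The two LinTS-style terms differ only in a \sqrt{d} vs. \sqrt{\log K} factor:
% d\sqrt{T\log K} \le d^{3/2}\sqrt{T} \iff \log K \le d.
\min\!\left(\tilde{O}\!\left(d^{3/2}\sqrt{T}\right),\; \tilde{O}\!\left(d\sqrt{T\log K}\right)\right)
= \begin{cases}
\tilde{O}\!\left(d\sqrt{T\log K}\right), & \log K \le d \text{ (moderately many arms)},\\
\tilde{O}\!\left(d^{3/2}\sqrt{T}\right), & \log K > d \text{ (very many or infinitely many arms)}.
\end{cases}
```

In particular, a bound of $\tilde{O}(d^{3/2}\sqrt{T})$ alone already coincides with the minimum whenever $\log K \ge d$, e.g. for compact (infinite) action sets.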
Rebuttal 1: Rebuttal: Although our main contribution is providing a tighter regret bound for linear ensemble sampling, we have performed experiments as per Reviewer QBvS's request. The results are shown in the attachment. We strongly believe that our work should be evaluated on its theoretical merit. Our improved regret bound for ensemble sampling matches LinTS for the first time, and our novel analysis should be sufficient for recognition. Furthermore, our regret analysis framework can be generalized to analyze other randomized algorithms such as LinPHE (and derive new regret bounds), which is always a plus, regardless of one's view on LinPHE and ensemble sampling. Pdf: /pdf/2a395fb067ebdfeebad8a3595e63e9456a8c3b25.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Taming Cross-Domain Representation Variance in Federated Prototype Learning with Heterogeneous Data Domains
Accept (poster)
Summary: This paper studies federated learning under heterogeneous data domains and introduces a federated prototype learning strategy, denoted as FedPLVM, to mitigate the problem. A dual-level prototype generation method is proposed to address domain variance between hard and easy domains, reducing the communication burden between clients and the server. Moreover, an \alpha-sparsity prototype loss is proposed to enhance the sparsity of the prototype representations. The experiment results demonstrate the efficacy of the proposed method and the necessity of each proposed module. Strengths: 1. The paper is overall well-written and easy to follow. 2. The authors explain why models exhibit varying performance across domains. 3. The experiment results show the superiority of the proposed method compared to SOTA methods. The ablation results also indicate the effectiveness of each proposed module. Weaknesses: 1. The proposed dual-level clustering strategy appears similar to FPL. What is the key difference between them? The primary distinction lies in the proposed \alpha-sparsity loss, which introduces incremental innovation to the paper. 2. The proposed dual-level prototype strategy utilizes FINCH for clustering. However, the clustering results from FINCH can vary at different steps. Which step's result is selected for use? Do results from different steps vary? 3. The impact of \alpha is only evaluated on small-scale datasets. Is the trend consistent on large-scale datasets like DomainNet? 4. In Sec. G, the authors claim that the evaluation protocol used in FPL is unfair. However, the protocol used in FPL is sensitive to both easy and hard datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: This paper is well-organized and the experiment results significantly surpass existing methods. But the level of innovation is questionable: the dual-level prototype generation is similar to FPL, and the only key difference is the \alpha-sparsity prototype loss. 
Moreover, I also have some concerns about this paper. For more details, please check the Weaknesses section. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. The authors addressed limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review. Here is our response to the mentioned weaknesses: 1. The key point of our dual-level clustering is local prototypes clustering. Different from easy domains, hard domains often exhibit looser clustering, increasing the risk of misclassification, especially for samples near category boundaries. Our dual-level prototypes clustering captures more feature distribution variation information by introducing the local prototypes clustering operation, since clustering locally provides more prototypes compared to simply averaging, especially considering the sparse distribution in hard domains, as shown in Figure 3 of our paper. To demonstrate this, we refer to Figure 1 of the attached pdf file. The y-axis shows the average number of local prototypes among classes generated at each selected round. Note that the number of prototypes for different classes can vary, resulting in non-integer averages. The easy domain (MNIST) has fewer prototypes generated compared to the hard domain (SVHN), showing that in hard domains, more prototypes are utilized to better capture the variance information. Furthermore, an average performance gain of 3.08% in the experiment on the impact of local prototypes clustering in Table 2 of our paper also supports this observation. At the global level, our clustering aims to reduce communication overhead and privacy risks by limiting the prototype variety each client sends, enhancing both efficiency and privacy, while FPL intends to balance the impact of local prototypes from each domain, involving only server-level clustering. Meanwhile, our proposed α-sparsity prototype loss is also novel, designed to complement the dual-level clustered prototypes operation. This loss function originates from the need to reduce the potential risk of feature representation overlap caused by our dual-level clustered prototypes operation. 
This loss function reasonably utilizes the variance information to reduce feature similarity across different classes while increasing it within the same class, promoting more effective and stable learning. 2. For each clustering operation, the FINCH Python package provides several candidate clustering results, and we select the one with the smallest number of clustered prototypes. During different rounds, the number of local clustered prototypes varies for different classes among different clients, generally decreasing as training progresses. From Figure 1 of the attached pdf file, we observe that the average number of clustered local prototypes consistently decreases over the training process. This indicates that our local prototypes clustering method aligns with the convergence process among different domains. 3. We further conduct the ablation study of $\alpha$ on the DomainNet dataset as shown in Figure 2 of the attached pdf file. Our extensive experiments indicate that maintaining $\alpha$ within the stable range of 0.125 to 0.25 consistently yields the best performance. Although the results vary with different values of $\alpha$, our method consistently outperforms all baseline methods, demonstrating its robustness. 4. In FPL, the client domain distribution is unbalanced, with data ownership distributed among 20 clients as follows: MNIST (3 clients), USPS (7 clients), SVHN (6 clients), and SYN (4 clients). However, the reported final average accuracy is computed by summing the results from all clients and dividing by 20, which gives more weight to domains with more clients, leading to an unfair representation. In Section G, we address this issue by first averaging the results within each domain based on the number of clients in that domain. We then average these domain-specific results over the number of domains. This weighted averaging ensures that all domains are given equal weight, regardless of the number of clients they have. 
For Question 1, please refer to our response to Weakness 1. --- Rebuttal Comment 1.1: Comment: Thank you for your response, some of my concerns are solved. However, I still do not find a distinction between dual-cluster and FPL. And for R4, the weighted average could be another valuable evaluation protocol but I do not think the protocol used in FPL and other methods is unfair. Thus, I would like to maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you for your continued feedback. We appreciate the opportunity to further clarify our approach. 1. We would like to further clarify the distinctions between our method and FPL. Our dual-level prototype clustering significantly differs from the single-level global prototype clustering FPL, both in purpose and effectiveness, as we detailed in our previous response and demonstrated in the visualization results of our paper. The local prototype clustering is crucial because it captures essential variance information, rather than just representing the average feature as in FPL. This is particularly important in challenging domains. For instance, as shown in Figure 1 of the attached PDF, the easier domain (MNIST) generates noticeably fewer prototypes compared to the harder domain (SVHN), which underscores the necessity of our local prototype clustering approach. Moreover, our proposed $\alpha$-sparsity loss introduces an innovative and effective enhancement to our dual-level prototype clustering. We conducted additional experiments to highlight the advantages of the $\alpha$--sparsity loss in terms of both convergence speed and final accuracy. In the Digit-5 experiment, we performed 100 global rounds as stated in our paper and compared the convergence speed with and without the $\alpha$-sparsity loss. 
The results, which report the average test accuracy after every 10 rounds, clearly show that the $\alpha$-sparsity loss leads to faster convergence (achieving it in 70 rounds compared to 80 rounds without the proposed loss) and higher final accuracy. | Rounds | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | |:-----------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:| | w/o $\alpha$-sparsity loss | 35.14 | 56.44 | 57.58 | 62.20 | 64.90 | 65.72 | 65.02 | 66.66 | 66.32 | 66.06 | | w/ $\alpha$-sparsity loss | 54.12 | 62.44 | 65.32 | 66.12 | 67.32 | 68.20 | 69.62 | 69.22 | 69.86 | 69.26 | 2. We appreciate your insightful comment. Following your suggestion, we have calculated the additional average accuracy using the protocol in FPL. Our updated results show that the average accuracy of our proposed method is 77.07%, compared to 75.54% for FPL. This further demonstrates the superior performance of our method.
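The two evaluation protocols discussed in this exchange (FPL's per-client average versus the per-domain average in Section G) can be sketched in a few lines. Only the client counts per domain (3/7/6/4) come from the discussion; the per-client accuracies below are hypothetical placeholders for illustration:

```python
# Sketch of the two averaging protocols. Client counts per domain follow the
# FPL setup quoted above; the accuracy values are hypothetical placeholders.
domains = {
    "MNIST": [99.0] * 3,   # 3 clients (easy domain)
    "USPS":  [96.0] * 7,   # 7 clients
    "SVHN":  [70.0] * 6,   # 6 clients (hard domain)
    "SYN":   [80.0] * 4,   # 4 clients
}

all_clients = [acc for accs in domains.values() for acc in accs]

# FPL protocol: average over all 20 clients; domains with more clients
# get proportionally more weight.
per_client_avg = sum(all_clients) / len(all_clients)

# Section G protocol: average within each domain first, then average the
# domain means, giving every domain equal weight.
domain_means = [sum(accs) / len(accs) for accs in domains.values()]
per_domain_avg = sum(domain_means) / len(domain_means)

print(per_client_avg, per_domain_avg)
```

The two numbers disagree whenever domain difficulty varies, because the per-client average weights USPS (7 clients) more heavily than SYN (4 clients).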
Summary: This paper focuses on Federated Prototypes Learning and reveals that existing methods create the same prototypes for different clients, which neglects the distribution diversity. In this work, the authors introduce the variance-aware dual-level prototype clustering and alpha sparsity prototype loss. Various experiments demonstrate the effectiveness of the proposed method. Strengths: Authors conduct comprehensive experiments to demonstrate the effectiveness. Weaknesses: This paper has several drawbacks. 1. Motivation for the α-Sparsity Prototype Loss. It is a little strange that "This multiplicity of prototypes could potentially lead to overlapping feature representations among different classes, especially in challenging client scenarios". As the authors claim the feature overlapping, the naive solution is to use other ways to cover the local feature behavior rather than multiple local prototypes. Furthermore, I assume that authors could employ OT (optimal transport) to require them to concentrate on different parts. I do not think the current operation is a suitable way to deal with the feature overlapping. 2. The paper architecture is not suitable. I wonder why authors spend a lot of space to write meaningless or complicated words and even leave no space for DomainNet and Office-10. Furthermore, I encourage the authors to consider the label skew effect, i.e., with different label skew degrees. Besides, experiments with a large number of clients are also important. 3. Figure 1 is confusing. Why would a monitor capture the digits figure? It is not a sound problem illustration. Technical Quality: 3 Clarity: 2 Questions for Authors: I encourage the author to take careful thinking for the federated prototype learning field. The existing solution shares a high similarity with FPL (CVPR'23) and seems like incremental work. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes, the authors have discussed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review. Here is our response to the mentioned weaknesses: 1. We introduce the $\alpha$-sparsity loss to mitigate the potential risk of feature representation overlap caused by the dual-level clustered prototypes operation. This loss focuses on maximizing inter-class distance and putting more attention on expanding the overall feature distribution to a broader range, thereby solving the feature overlapping problem efficiently and simply. While Optimal Transport (OT) may be employed in our framework, it introduces additional significant computational complexity [1]. Given our experimental results, it is effective to use our proposed $\alpha$-sparsity loss due to its simplicity. [1]. Peyré G, Cuturi M. Computational optimal transport: With applications to data science[J]. Foundations and Trends® in Machine Learning, 2019, 11(5-6): 355-607. 2. We apologize for any confusion caused by our paper's structure. Could you please point us to the specific 'meaningless' or 'complicated' passages so we can revise them accordingly? Moreover, we can move the DomainNet and Office-10 parts from the Appendix to the main text per your suggestion in our final version. Regarding the label skew effect, we conduct label non-IID experiments using the Dirichlet method with a distribution parameter of 0.5, as detailed in Section E. The label distribution becomes more non-IID as the distribution parameter decreases. To illustrate the impact of different distribution parameters (0.1, 0.5, 1, 5, and 10), we compare the average accuracy with the best baseline method FPL from our previous non-IID experiment on Digit-5. Figure 3 of the attached pdf file shows that our method consistently outperforms FPL across all distribution parameters. However, the performance gap narrows as the datasets become more non-IID. 
This trend is expected, as in non-IID datasets, the number of samples for some classes can be very small, leading to less representative feature representations. Consequently, our clustered local prototypes may become similar to the averaged single prototype used in FPL. We also conduct the experiments on a larger scale of 20 clients, the same number as in FPL, with the datasets of every four clients coming from the same domain. As shown in Table 1 of the attached pdf file, our method still outperforms FPL under such a setting. Note that we have already explored the unbalanced client distribution setting of 10 clients in Section G. Question 1: Firstly, while both our method and FPL employ prototype clustering, the objectives and implementations are markedly different. FPL performs single-level global clustering, intended to balance the impact of local prototypes from each domain, involving only server-level clustering. This single-level global clustering is specifically designed to address imbalances in client distribution across domains, aiming to neutralize the skewed influence of domains with more clients on global model training. Conversely, our method integrates dual-level clustering at both the client (local) and server (global) levels. At the global level, our clustering aims to reduce communication overhead and privacy risks by limiting the prototype variety each client sends, enhancing both efficiency and privacy. Our local prototype clustering is crucial because it captures essential variance information, not just the average data, which is particularly important in hard domains. Hard domains often exhibit looser clustering, increasing the risk of misclassification, especially for samples near category boundaries. Therefore, capturing only the average local prototype suffices for easy domains but falls short for hard domains. To demonstrate this, we refer to Figure 1 of the attached pdf file. 
The y-axis shows the average number of local prototypes among classes generated at each selected round. The easy domain (MNIST) has fewer prototypes generated compared to the hard domain (SVHN), showing that in hard domains, more prototypes are utilized to better capture the variance information. Furthermore, an average performance gain of 3.08% in the experiment on the impact of local prototypes clustering in Table 2 of our paper also supports this observation. Secondly, we introduce an innovative $\alpha$-sparsity prototype loss featuring a corrective component, which mitigates the potential risk of feature representation overlap caused by the dual-level clustered prototypes operation. This loss function reasonably utilizes the variance information to reduce feature similarity across different classes while increasing it within the same class, promoting more effective and stable learning. --- Rebuttal 2: Comment: Thank you for the AC tips. I have provided more details in my response to the review. As for the details of the review, 1. The naive solution is to utilize a Gaussian distribution to describe class information. For example, FedFA [1] is based on Gaussian modeling of feature statistic augmentation. I encourage the authors to compare the conceptual differences between local prototype clustering and class Gaussian construction. Additionally, the authors should clearly explain the advantages of utilizing a dual-level prototype approach. 2. With respect to paper organization, the authors should spend more space on concept comparison with existing methods. However, the authors focus on preliminary discussion. Thank you for the authors' feedback. I still have the following questions: 1. You refer to a paper from 2019, but many papers related to Optimal Transport (OT) have been published in recent years. For instance, FedOTP [2] introduces the OT loss in regularization. 2. The authors did not address question 3: “Figure 1 is confusing. Why would a monitor capture the digits? 
It does not seem like a logical problem illustration.” 3. You mentioned that “This loss function reasonably utilizes the variance information to reduce feature similarity across different classes while increasing it within the same class, promoting more effective and stable learning.” Could the authors provide the corresponding theoretical or convergence analysis? Additionally, regarding “which mitigates the potential risk of feature representation overlapping,” could the authors offer visualizations to support this claim rather than relying on terms like “potential”? [1] FedFA: Federated Feature Augmentation [2] Global and Local Prompts Cooperation via Optimal Transport for Federated Learning. CVPR 2024 --- Rebuttal Comment 2.1: Comment: Thank you for your further comments. For the details of the review: 1. FedFA aims to augment the feature statistics by constructing a vicinity Gaussian distribution for each data point and changing the empirical risk to the vicinal risk. Such an augmentation operation can enrich the feature information after the feature extractor layer. However, this approach has its limitations, especially in cases where data volume is small or distribution is uneven. In these scenarios, the constructed Gaussian distribution might not fully capture the true diversity of the data and can have bias. Also, it requires additional computation regarding the distribution parameters locally for each feature extractor layer with augmentation. In contrast, our proposed FedPLVM utilizes only the feature representation of the final feature extractor layer to cluster several local prototypes regardless of the even or uneven feature distribution, further performs global clustering, and combines it with the $\alpha$-sparsity loss in local training. Lastly, FedFA is one of the baselines we compared in our paper, and the results show our method outperforms FedFA in Digit-5, Office-10 and DomainNet. 2. 
We appreciate your suggestions regarding the organization of our paper. However, we must respectfully disagree with the comment that excessive space is devoted to preliminary discussions at the expense of comparisons with existing methods. Firstly, the preliminary section occupies only half a page and concisely presents essential knowledge required for understanding our proposed method. This information is crucial and cannot be further condensed without loss of clarity. Secondly, we have integrated comparisons with existing methods throughout the introduction and methodology sections. Specifically, our comparison with FPL, which is most relevant to our work, includes dedicated paragraphs discussing the differences (see lines 232-247). For Questions: 1. We thank you for suggesting the paper on FedOTP. While FedOTP is indeed a noteworthy contribution, we believe it addresses a different aspect from that of our paper. FedOTP focuses on developing efficient collaborative prompt learning strategies to integrate pre-trained visual-language models into federated learning frameworks. This topic, although valuable, is not closely related to the core focus of our research. 2. The icon in Figure 1 of our paper mentioned in your question is not that of a monitor but a laptop. In FL, the client device doesn’t necessarily need to capture the data itself. In our illustration, the laptop processes the stored digital data without needing to capture figures or photos directly. 3. We thank you for your advice on including a convergence speed analysis. We agree that theoretical convergence analysis is a crucial aspect of federated learning. Typically, such analyses in existing FL research depend heavily on strong assumptions about gradient bounds and the differences between global and local loss functions. Importantly, incorporating prototype learning alters the loss function itself. 
Since FL convergence analyses require specific assumptions regarding the loss functions, the advantages of integrating prototype learning into the loss function might not be readily apparent. This limitation is also why existing FL studies that utilize prototype learning, such as FPL, do not provide theoretical convergence analysis. Instead, to address concerns regarding convergence speed, we have provided experimental results that clearly demonstrate the improvements our method achieves. In our Digit-5 experiment, we performed 100 global rounds as stated in our paper and compared the convergence speed with and without our proposed $\alpha$-sparsity loss. We report the average test accuracy after every 10 rounds of training. With the $\alpha$-sparsity loss, we observe faster convergence (70 rounds compared to 80 rounds without the proposed loss function) and higher converged accuracy: | Round | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | w/o $\alpha$-sparsity loss | 35.14 | 56.44 | 57.58 | 62.20 | 64.90 | 65.72 | 65.02 | 66.66 | 66.32 | 66.06 | | w/ $\alpha$-sparsity loss | 54.12 | 62.44 | 65.32 | 66.12 | 67.32 | 68.20 | 69.62 | 69.22 | 69.86 | 69.26 | 4. We appreciate your suggestion to include visualizations to support our claims. However, during the author-reviewer discussion period, we are unable to add new figures. Should our paper be accepted, we will certainly include the recommended visualizations in the final version of the manuscript as advised.
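To make the stated intent concrete (reduce feature-to-prototype similarity across classes while increasing it within the same class), here is a generic prototype-contrastive term in the InfoNCE style. This is our own minimal sketch with made-up names, not the paper's actual $\alpha$-sparsity loss, which additionally carries a corrective component:

```python
import numpy as np

def prototype_contrastive_loss(feature, prototypes, labels, y, tau=0.5):
    """Generic InfoNCE-style loss over clustered prototypes (illustrative only).

    feature:    (d,) normalized feature of one sample
    prototypes: (m, d) normalized clustered prototypes
    labels:     (m,) class label of each prototype
    y:          class label of the sample
    tau:        temperature controlling the sharpness of the softmax
    """
    sims = prototypes @ feature / tau      # cosine similarities scaled by tau
    exp = np.exp(sims - sims.max())        # numerically stabilized softmax weights
    pos = exp[labels == y].sum()           # mass on same-class prototypes
    return -np.log(pos / exp.sum())        # lower when feature sits near own class

# Toy check: a feature aligned with its own class prototype incurs a lower
# loss than one aligned with another class's prototype.
protos = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
aligned = prototype_contrastive_loss(np.array([1.0, 0.0]), protos, labels, 0)
misaligned = prototype_contrastive_loss(np.array([0.0, 1.0]), protos, labels, 0)
assert aligned < misaligned
```

Raising `tau` flattens the softmax and shrinks this term, which is consistent with the authors' later remark that the temperature can be used to balance the contrastive and corrective sub-terms.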
Summary: This paper aims to investigate the federated prototype learning problem with data heterogeneity. To handle the cross-domain representation variance problem, a new method termed FedPLM is proposed, which includes a dual-level prototype clustering mechanism and an alpha-sparsity prototype loss. Experiments are conducted to show the effectiveness of the proposed method. Strengths: 1. The proposed method is well-designed for the specific research problem and has good novelty. 2. The motivation of the paper is solid. 3. The experiments are comprehensive to discuss the properties of the proposed method. 4. The paper is well-written and easy to follow. Weaknesses: 1. On several datasets (e.g. MNIST and USPS), the performance of the proposed method is not significant enough. Can the author give more explanation for this point? 2. The running efficiency of the proposed method is not given. Complexity analysis and efficiency-related experiments are expected to discuss the efficiency of the proposed method. Technical Quality: 3 Clarity: 4 Questions for Authors: The authors are expected to address the concerns in the block of "Weaknesses". Also, I have one more question: How to balance the two sub-terms (contra and corr) in alpha loss? As they may be on different scales, is that feasible to directly add them together? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations are discussed in the paper. For the negative societal impact, I didn't find any concern from my side. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review. Here is our response to the mentioned weaknesses: 1. Our design principle aims to ensure our method excels particularly in challenging domains, which often have lower baseline accuracy, such as SVHN. This aligns with our motivation to tackle diverse learning challenges across various domains. In more challenging domains, the dual-level clustering method generates multiple clustered local prototypes that capture variance information effectively, thus enhancing focus and fairness, as shown in Figure 1 of the attached pdf file. For easier datasets like MNIST, where feature representations naturally concentrate, our local clustering tends to produce a centralized prototype similar to the traditional average method, leading to a smaller performance gap. We focus on hard domains because in real-world settings, difficult domains are common and pose significant challenges that need effective solutions. Previous works often struggle in these hard domains, and our method addresses this gap by ensuring robust performance improvements where they are most needed. Contrary to the concern about insignificant results, our method does show considerable improvements in challenging domains. For instance, on SVHN, Synth and MNIST-M, our approach achieved notable performance gains of 5.3%, 2.72% and 3.14%, respectively, compared to SOTA methods. This demonstrates the effectiveness and robustness of our method in handling difficult datasets, which are critical for practical applications. 2. For the computation cost of FedPLVM, the feature representations of data samples can be obtained directly from the model without any additional computation. Our method introduces the dual-level prototypes clustering compared to other baselines, and we test the average running time for local training (0.552s), local prototypes clustering (0.0296s) and global prototypes clustering (0.0214s). 
Compared to the local training, our additional dual-level clustering step is negligible (approximately 0.0296 / 0.552 ≈ 5% of the local training time). Meanwhile, the proposed $\alpha$-sparsity loss does not change the network structure. For the communication cost of FedPLVM, each client uploads its local clustered prototypes along with the local model to the server, and then downloads the global clustered prototypes with the global model for the local training of the next round. Our modified lightweight ResNet-10 has approximately 4.9 million parameters. In our network, one prototype is only a 512-dimensional vector, so the maximum total number of uploaded parameters for the local clustered prototypes is approximately 3.2 (from the highest point of the red line in Figure 1 of the attached pdf file) * 10 (number of classes) * 512 = 0.016 million, and the estimated number of downloaded parameters for the global clustered prototypes is 21.20 (from Table 3 in the paper) * 512 = 0.011 million, both of which are negligible compared to the size of the model. Note that the previous work FedPCL requires each client to download the prototypes of all other clients from the server, which makes our clustered-prototypes operation comparatively more practical. Question 1: A straightforward approach to balancing the two sub-terms is introducing another hyper-parameter to control the weight. However, to maintain the simplicity of our proposed $\alpha$-sparsity loss, we can instead adjust the temperature $\tau$ in the contrastive term to balance the two sub-terms. In our experiments, using our default $\tau$, the average loss value of the first term in the 50th round out of 100 is 0.364, while that of the second term is 0.105, demonstrating comparable values.
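The parameter-count arithmetic in the rebuttal above can be sanity-checked in a few lines of Python. The constants (4.9M model parameters, 512-dimensional prototypes, 3.2 average local prototypes per class, 10 classes, 21.20 global prototypes) are the figures quoted in the rebuttal; the function itself is an illustrative sketch, not code from the paper.

```python
# Back-of-the-envelope check of the communication overhead quoted above.
# All constants come from the rebuttal; the function name is ours.

def prototype_comm_cost(proto_dim=512, classes=10,
                        local_protos_per_class=3.2, global_protos=21.20):
    upload = local_protos_per_class * classes * proto_dim   # params sent up
    download = global_protos * proto_dim                    # params sent down
    return upload, download

MODEL_PARAMS = 4.9e6  # lightweight ResNet-10, per the rebuttal

up, down = prototype_comm_cost()
print(f"upload:   {up / 1e6:.3f} M params")    # ≈ 0.016 M, matching the rebuttal
print(f"download: {down / 1e6:.3f} M params")  # ≈ 0.011 M, matching the rebuttal
print(f"overhead vs. model: {(up + down) / MODEL_PARAMS:.2%}")
```

Both directions together add well under 1% of the model's parameter count per round, which is the rebuttal's "negligible" claim in concrete terms.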
Summary: The paper introduces FedPLVM, a federated learning approach that improves federated prototype learning (FedPL) in heterogeneous data-domain settings. Traditional FedPL methods create the same number of prototypes for each client, leading to performance disparities across clients with different data distributions. FedPLVM mitigates this by introducing variance-aware dual-level prototype clustering and an $\alpha$-sparsity prototype loss. Specifically, each client first clusters the feature vectors of all same-class local samples to get local clustered prototypes. Then, the server conducts global prototype clustering to reduce the number of prototypes for each class. Experiments show that FedPLVM outperforms the other baselines on data from multiple domains. Strengths: 1. The motivation is clear. It makes sense to get different prototypes under tasks with different difficulties. 2. The proposed $\alpha$-sparsity prototype loss is interesting. The ablation study on the loss and the dual-level prototype generation is clear. Weaknesses: 1. The paper misses important baselines. Personalized federated learning methods are not compared, although they should be suitable for the studied setting. 2. The computation and communication cost of FedPLVM compared with other studies is not presented. 3. The clustering method needs more details. It is not clear why clustering can address the heterogeneous-domain issue. 4. The work is incremental relative to prior FedPL methods. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. From Algorithm 1, the goal of FedPLVM is to train a single global model. From the experiments, the model is tested on the local datasets. Given data from different domains, why don’t the parties train personalized local models with personalized federated learning? 2. The paper mentions that more prototypes are needed for harder domains. For the clustering method, is there a guarantee that the harder domain will also get more local prototypes?
The paper may add more analysis or experiments to demonstrate it. 3. Will FedPL have significantly higher computation and communication costs than FedAvg? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review. Here is our response to the mentioned weaknesses: 1. Firstly, we want to highlight that the main objective of this paper is to develop a global model that works across all domain datasets. This approach aligns with the baseline methods, such as FedProto and FPL, which also address variations in client data domains. Secondly, training a unified global model, as opposed to personalized models, offers distinct advantages. For instance, a potential use case involves applying the trained model to broader scenarios, such as when a client has data from multiple domains and the domain of each data sample is unknown. In such cases, a global model remains effective, whereas personalized models may not be suitable. Lastly, developing a strong global model enhances our ability to implement personalization effectively. While our method can integrate personalization techniques, it is not the current focus of our work. We plan to explore personalization in more depth in our future research. 2. For the computation cost of FedPLVM, the feature representations of data samples can be obtained directly from the model without any additional computation. Our method introduces the dual-level prototype clustering compared to other baselines, and we tested the average running time of local training (0.552s), local prototype clustering (0.0296s), and global prototype clustering (0.0214s). The additional local prototype clustering in the dual-level clustering step is negligible, accounting for only about 0.0296 / 0.552 ≈ 5% of the local training time. Meanwhile, the proposed $\alpha$-sparsity loss does not change the network structure. For the communication cost of FedPLVM, each client uploads local clustered prototypes along with the local model to the server. The client then downloads the global clustered prototypes and the global model for the next round of local training.
Our employed ResNet-10 has approximately 4.9 million parameters. Each prototype is a 512-dimensional vector, so the maximum number of uploaded parameters for the local clustered prototypes is approximately 3.2 (the highest average number of local clustered prototypes on the red line in Figure 1 of the attached pdf file) * 10 (number of classes) * 512 = 0.016 million, and the estimated number of downloaded parameters for the global clustered prototypes is 21.20 (from Table 3 in the paper) * 512 = 0.011 million. Both are negligible compared to the model size. Notably, the previous work FedPCL requires each client to download the prototypes from all other clients via the server, making our clustered-prototypes operation more efficient and feasible. 3. Local prototype clustering is beneficial because it captures essential variance information, not just the average data, which is particularly crucial in hard domains. Easy domains tend to show tight clustering within the same category and clear distinctions between different categories, facilitating accurate classification. In contrast, hard domains often exhibit looser clustering, increasing the risk of misclassification, especially for samples near category boundaries. Therefore, capturing only the average data suffices for easy domains but falls short for hard domains. To demonstrate this, we refer to Figure 1 of the attached pdf file. The y-axis shows the average number of local prototypes among classes generated at each selected round. The easy domain (MNIST) has fewer prototypes generated than the hard domain (SVHN), showing that in hard domains, more prototypes are utilized to better capture the variance information. Furthermore, an average performance gain of 3.08% in the experiment on the impact of local prototype clustering in Table 2 of our paper also supports this observation. 4. We respectfully disagree. There are a few key distinctions between our work and FedPL.
Firstly, while both our method and FPL employ prototype clustering, the objectives and implementations are markedly different. FPL performs single-level global clustering, intended to balance the impact of local prototypes from each domain, involving only server-level clustering. This single-level global clustering is specifically designed to address imbalances in client distribution across domains, aiming to neutralize the skewed influence of domains with more clients on global model training. Conversely, our method integrates dual-level clustering at both the client (local) and server (global) levels. At the global level, our clustering aims to reduce communication overhead and privacy risks by limiting the prototype variety each client sends, enhancing both efficiency and privacy. Our local prototype clustering is crucial because it captures essential variance information, not just the average data, which is particularly important in hard domains. Hard domains often exhibit looser clustering, increasing the risk of misclassification, especially for samples near category boundaries. Therefore, capturing only the average local prototype suffices for easy domains but falls short for hard domains. Secondly, we introduce an innovative $\alpha$-sparsity prototype loss, highlighted as “interesting” in the strengths part of your review, featuring a corrective component that mitigates the potential risk of feature representation overlap caused by the dual-level clustered prototypes operation. This loss function utilizes the variance information to reduce feature similarity across different classes while increasing it within the same class, promoting more effective and stable learning. For Questions 1, 2, and 3, please refer to our responses to Weaknesses 1, 3, and 2, respectively. --- Rebuttal Comment 1.1: Comment: Thanks for the author's response. Some of my concerns have been addressed. I'll increase my score to 5.
Rebuttal 1: Rebuttal: We are grateful to all reviewers for their insightful feedback and recognition of our work. We particularly appreciate Reviewer GDmh’s comment that our proposed $\alpha$-sparsity prototype loss is well designed, Reviewer dkgn's comment on the novelty of our proposed dual-level prototype clustering method, and the other reviewers’ affirmation that our experiments are comprehensive and validate our proposed method effectively. We also value the feedback pointing out areas of weakness and the questions raised about our paper. We have prepared a separate response for each reviewer addressing their specific concerns. Below, we first provide a consolidated response addressing the main concerns raised by multiple reviewers: 1. Benefit from Local Prototype Clustering: Local prototype clustering effectively captures essential variance information, not just averages, which is vital in complex domains. Our method generates more prototypes for challenging domains to better capture this variability, while fewer prototypes are sufficient in simpler domains where averages provide enough detail. This strategy is depicted in Figure 1 of the attached pdf file, which shows fewer prototypes in the easier domain (MNIST) and more in the harder domain (SVHN), highlighting the necessity for more prototypes in complex scenarios to adequately capture variance. 2. Computation and Communication Costs: - Computation: Our approach introduces dual-level prototype clustering, adding minimal overhead (only about 5% of the local training time) compared to other baselines. - Communication: Each client uploads only the extra local clustered prototypes to the server. Given the size of the local model (4.9 million parameters), the additional prototypes (approximately 0.01 million parameters) are negligible. 3. Distinction from FPL: Our objectives and implementations differ significantly from FPL.
FPL employs single-level global clustering, aiming to harmonize local prototypes’ impact across domains, involving only server-level clustering. In contrast, our approach addresses the varying learning challenges across domains, utilizing dual-level clustering at both the client (local) and server (global) levels. At the global level, our clustering strategy reduces communication overhead and privacy risks by limiting the variety of prototypes each client sends, thus enhancing both efficiency and privacy. Our local prototype clustering is crucial in complex domains, where it captures essential variance information, critical for reducing misclassification risks near category boundaries. This is not necessary in easier domains where capturing average data suffices. Additionally, we introduce the innovative $\alpha$-sparsity prototype loss with a corrective component that mitigates the risk of feature representation overlap caused by dual-level clustered prototypes. This loss function effectively utilizes variance information to decrease feature similarity across different classes while increasing it within the same class, promoting more effective and stable learning. Pdf: /pdf/99a5627cedba151d9ce9e56a640f1a08df682365.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The domain gap among multiple clients impedes the generalization of federated learning. To mitigate cross-domain feature representation variance, the authors introduce FedPLVM, which establishes variance-aware dual-level prototype clustering and employs a novel $\alpha$-sparsity prototype loss. To verify the effectiveness of the proposed method, extensive experiments are conducted. Strengths: 1. This paper is well organized and written in a way that is easy to understand. 2. The experimental design is reasonable, and a large number of experiments prove the effectiveness of the proposed methodology. Weaknesses: 1. It makes sense to perform global-level prototype clustering on the server side, but the benefit of performing intra-class prototype clustering on the local side is unclear: why is it advantageous to get more prototypes for hard-domain samples? 2. In Eq. (8), what is the meaning of $C_y$? 3. From Fig. 4, $\alpha$ has a significant impact on the results, which is not conducive to the robustness of the algorithm. 4. The latest compared method in the paper is from 2023, and more recent comparison methods are lacking. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review. Here is our response to the mentioned weaknesses: 1. Local prototype clustering is beneficial because it captures essential variance information, not just the average data, which is particularly crucial in hard domains. Easy domains tend to show tight clustering within the same category and clear distinctions between different categories, facilitating accurate classification. In contrast, hard domains often exhibit looser clustering, increasing the risk of misclassification, especially for samples near category boundaries. Therefore, capturing only the average data suffices for easy domains but falls short for hard domains. Our proposed method captures more feature-distribution variation information by introducing local prototype clustering, since clustering provides more prototypes than simple averaging, especially considering the sparse distribution in hard domains, as shown in Figure 3 of our paper. For easy domains, where the average is sufficient, our method generates fewer prototypes. To demonstrate this, we refer to Figure 1 of the attached pdf file. The y-axis shows the average number of local prototypes among classes generated at each selected round. The easy domain (MNIST) has fewer prototypes generated than the hard domain (SVHN), showing that in hard domains, more prototypes are utilized to better capture the variance information. Furthermore, an average performance gain of 3.08% in the experiment on the impact of local prototype clustering in Table 2 of our paper also supports this observation. 2. $C_y$ is the number of clustered prototypes for class $y$. 3. Firstly, our extensive experiments indicate that maintaining $\alpha$ within the stable range of 0.125 to 0.25 consistently yields the best performance. We further conducted the ablation study on the large-scale Domain dataset.
As shown in Figure 2 of the attached pdf file, we can still observe a performance benefit from the stable range. Secondly, it is important to note that $\alpha$ is a crucial element in our proposed $\alpha$-sparsity loss. If $\alpha$ is set to 1, our $\alpha$-sparsity loss reduces to a standard contrastive loss, although it still includes a specially designed corrective term (and also the proposed dual-level prototype clustering operation). Therefore, it is natural for $\alpha$ to impact the results. Lastly, in our ablation study (refer to Figure 4 in the main paper), although the results vary with $\alpha$, our method consistently outperforms all baseline methods, demonstrating its robustness. 4. To the best of our knowledge, there is still no newer work under the same setting. If you are aware of any such studies, please let us know, and we will compare our work with them. Furthermore, there is one recent fair federated learning work, FedHeal [1], which compares with other fair federated learning methods by evaluating whether they work well with some baseline methods used in our work, such as FedAvg and FedProto, and bring any benefits. Although it operates under a different setting from ours and does not compare with any baseline works in our setting, we conducted experiments to demonstrate that our proposed framework performs better than the SOTA method FPL + FedHeal in [1]. From Table 1 of the attached pdf file, we can observe that while FPL gains a performance boost from FedHeal, our proposed method still outperforms it. [1] Chen, Y., Huang, W., and Ye, M. Fair Federated Learning under Domain Skew with Local Consistency and Domain Diversity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024: 12077-12086. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. In light of the novelty and other reviewers' comments, I tend to maintain my score.
Scale-invariant Optimal Sampling for Rare-events Data and Sparse Models
Accept (poster)
Summary: This paper studies the problem of learning a sparse model under rare events and scale-invariance. The authors provide an optimal subsampling function to handle scale-invariance for a Lasso-regularized model. They finally provide experimental results for their subsampling scheme. Strengths: * Well-written * Looking at MSPE rather than asymptotic variance seems like a good way to get around the scaling issue. Weaknesses: * This problem setting feels very specific: rare-events data, sparse models, and scale-invariance. It would be good to motivate why we should care about all of these simultaneously. * None of the theoretical results given in the main body seem to have any discussion about proof strategy or novel techniques. It would be interesting to know what technical challenges were posed in developing these results, as well as how they were resolved. * In Section 4, the paper compares the proposed penalized MSCL estimator to the MSCL estimator. As far as I can tell, this MSCL estimator has not been defined in the paper, and so these comparisons and discussions are hard to make sense of. Technical Quality: 3 Clarity: 3 Questions for Authors: * In Proposition 1, why is minimizing $tr(M_{w(A)})$ of interest? It seems like all one would care about is minimizing the asymptotic variance (which, as far as I understand, is $V_{w(A)}$). * In the preamble of Section 3.1 (lines 156-158), the MSPE is motivated by the desire for a scale-invariant objective. Is the problem with the previous approach(es) that scaling $x_j$ would inversely scale the corresponding entries of $V_{w(A)}$? * In the preamble of Section 4 (lines 183 and 184), which discusses the inefficiency of the IPW estimator, the paper says IPW gives less weight to more informative points. Is this a formal (or at least formalizable) claim, or is it just a heuristic? Either way, can this be elaborated upon? * The optimality in this work seems to be that of the optimal subsampling function with respect to the MSPE objective.
Is it possible to argue that this estimator is information-theoretically the best that one can do? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review of our paper. We would like to point out that rare-events data and sparse models are common in practice, and scale-dependence is a frequent and crucial issue in subsampling that has not been addressed in the existing literature. Following your great suggestion, we will provide more details on the motivation behind our paper's settings in the revised version. We did not compare the penalized MSCL estimator to the MSCL estimator. The penalized MSCL estimator is defined in equation (9), while the MSCL estimator is simply this estimator without the last penalty term. We did not separately define the MSCL estimator as it is not implemented in our paper. We will clarify this in the final version. Furthermore, your comment has made us realize that using the subscript "mscl" instead of "lik" would more clearly indicate the penalized MSCL estimator. We will make this change in the revision. Please see the following for our responses to your specific comments/questions. **Q1. The reason why minimizing $tr(M_{w(A)})$ is of interest in Proposition 1.** **R1.** Minimizing $tr(M_{w(A)})$ is of interest because it is the asymptotic variance of estimating $M_{(A)}\theta_{t(A)}$, a linearly transformed parameter. One advantage of this criterion is that it often leads to optimal subsampling probabilities that are easier to calculate. We will provide further clarification on this point in the final revision. **Q2. The problem of previous approaches.** **R2.** Thank you for your insightful question. Yes, the problem with previous approaches is essentially caused by the fact that scaling $x_j$ would scale some entries of $V_{w(A)}$. However, the resulting effects on the optimal probabilities are more complex than might be expected. For example, Figure 1 demonstrates that scaling a single covariate has markedly different effects on the A-OS and L-OS probabilities. **Q3.
Discussion of the inefficiency of the IPW estimator.** **R3.** The statement that the IPW estimator is not the most efficient due to its assignment of lower weights to more informative data points is a heuristic explanation for its inefficiency. More specifically, optimal sampling probabilities are constructed so that more informative data points have higher probabilities $\pi(\mathbf{x}_i^{\mathrm{sub}}, y_i^{\mathrm{sub}})$'s of being included in the subsample. However, the IPW estimator assigns smaller weights of $1/\pi(\mathbf{x}_i^{\mathrm{sub}}, y_i^{\mathrm{sub}})$'s. This inverse relationship suggests that the IPW estimator does not fully utilize the information contained in the optimal subsample, which intuitively explains its reduced efficiency. Furthermore, we can also formally prove that the IPW estimator is less efficient than the penalized MSCL estimator. Here is a sketch of the proof. Let $$ h = 1 + c \{\varphi(\mathbf{x})\}^{-1} e^{f(\mathbf{x}; \beta_t)}, $$ $$ \mathbf{v} = \sqrt{e^{f(\mathbf{x}; \beta_t)}} \dot{g}_{(A)}(\mathbf{x}; \theta_t), $$ $$ \mathbf{f} = h^{1/2} \mathbf{v}, $$ $$ \mathbf{g} = h^{-1/2} \mathbf{v}. $$ We have that $$ \mathbb{E}(\mathbf{g} \mathbf{f}^{T}) = \mathbb{E}(\mathbf{f} \mathbf{g}^{T}) = \mathbb{E}(\mathbf{v} \mathbf{v}^{T}) = M_{(A)}, $$ $$ \mathbb{E}(\mathbf{f} \mathbf{f}^{T}) = M_{w(A)}, $$ $$ \mathbb{E}(\mathbf{g} \mathbf{g}^{T}) = \Lambda_{mscl(A)}. $$ Then, the matrix form of Cauchy-Schwarz's inequality [ref1], $$ \mathbb{E}(\mathbf{g} \mathbf{g}^{T}) \geq \mathbb{E}(\mathbf{g} \mathbf{f}^{T}) \{\mathbb{E}(\mathbf{f} \mathbf{f}^{T})\}^{-1} \mathbb{E}(\mathbf{f} \mathbf{g}^{T}), $$ leads to $$ \Lambda_{mscl(A)} \ge V_{w(A)}^{-1}, $$ which implies that $$ V_{mscl(A)} \le V_{w(A)}. $$ In the revision, we will elaborate on this point, provide a detailed proof, and further clarify why the IPW estimator is not the most efficient. [ref1] Tripathi, Gautam. "A matrix extension of the Cauchy-Schwarz inequality." 
Economics Letters 63.1 (1999): 1-3. **Q4. Theoretical optimality of the estimator** **R4.** The proposed penalized MSCL estimator is indeed the best in the sense that it is the most efficient estimator among a large class of asymptotically unbiased estimators. Specifically, the asymptotic variance $V_{mscl(A)}$ attains the Cramer-Rao lower bound for this class of estimators. This result formally establishes the estimator's optimality within the class. We will present this finding in detail, along with a detailed proof, in the future version of this paper. --- Rebuttal Comment 1.1: Comment: Thank you for your response, it clarified a lot of my questions. I will be increasing my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for raising the score on our paper!
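The Cauchy-Schwarz step in the rebuttal's proof sketch above can be illustrated numerically in the scalar case: with $f = h^{1/2}v$ and $g = h^{-1/2}v$ for $h > 0$, sample averages always satisfy $\{\text{mean}(gf)\}^2 \le \text{mean}(g^2)\,\text{mean}(f^2)$, the one-dimensional instance of $\mathbb{E}(\mathbf{g}\mathbf{g}^T) \geq \mathbb{E}(\mathbf{g}\mathbf{f}^T)\{\mathbb{E}(\mathbf{f}\mathbf{f}^T)\}^{-1}\mathbb{E}(\mathbf{f}\mathbf{g}^T)$. The sketch below is an independent sanity check with arbitrary illustrative distributions for $h$ and $v$, not code from the paper.

```python
import random

# Scalar sanity check of the matrix Cauchy-Schwarz argument:
# f = h^{1/2} v, g = h^{-1/2} v with h > 0, so g*f = v^2 exactly.
# The distributions of v and h are arbitrary illustrative choices.
random.seed(0)
n = 10_000
samples = []
for _ in range(n):
    v = random.gauss(0.0, 1.0)
    h = 1.0 + random.expovariate(1.0)  # any h > 1; mimics 1 + c*phi^{-1}*e^f
    samples.append((h ** 0.5 * v, v / h ** 0.5))  # (f, g)

E_ff = sum(f * f for f, g in samples) / n
E_gg = sum(g * g for f, g in samples) / n
E_gf = sum(g * f for f, g in samples) / n  # equals mean(v^2)

# Cauchy-Schwarz holds exactly for these sample averages, so the
# "Lambda >= V^{-1}" conclusion holds in this scalar analogue.
assert E_gf ** 2 <= E_gg * E_ff
print(E_gg, ">=", E_gf ** 2 / E_ff)
```

Here `E_gg` plays the role of $\Lambda_{mscl(A)}$ and `E_gf ** 2 / E_ff` that of $V_{w(A)}^{-1}$, matching the direction of the inequality claimed in R3.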
Summary: The scale-invariant optimal subsampling function proposed in the paper addresses the challenge of inactive features in rare-events data by overcoming the scale-dependence of existing optimal subsampling methods. In the context of variable selection for rare-events data, where distinguishing active and inactive features is very important, the scale-invariant optimal subsampling function minimizes prediction error regardless of scaling transformations applied to inactive features. The key is to provide reliable and unbiased subsampling probabilities even when inactive variables are transformed without altering the underlying model. This is important because inappropriate scaling transformations of inactive features can significantly impact traditional optimal subsampling methods, leading to unreliable or misleading results. This results in improved accuracy of parameter estimation and variable selection. Strengths: Good novelty: The paper introduces a novel scale-invariant optimal subsampling function to address the challenge of inactive features in rare-events data, offering a new approach to improving parameter estimation and variable selection in sparse models. This novel contribution could advance the field of rare-events data analysis. Strong theory: The paper establishes a theoretical foundation for the proposed scale-invariant optimal subsampling function, including discussions on assumptions, proofs, and the mathematical framework supporting the method. This strong theoretical basis enhances the credibility and rigor of the proposed approach. While I have not gone through all the proofs, I think the results seem sound. Sufficient empirical evidence: Further, the paper demonstrates the practical relevance of the scale-invariant optimal subsampling function through numerical experiments using both simulated and real-world datasets. The results show non-trivial improvements in prediction error minimization and estimation efficiency.
Clarity: The paper is largely well-written and provides clear and detailed explanations of the methodology, assumptions, and implications of the scale-invariant optimal subsampling function, making it accessible even though there is a lot of notation and theory. Weaknesses: I think this is a good paper that should be accepted. Minor gripe: It is not clear to me exactly how the methods in Section 1 are affected by the scaling of x (thereby making them NOT scale-invariant). I understand this would require going through the earlier papers, e.g., [21], which I did not do. It would make the paper complete if the authors could add a discussion to this effect in the appendix of the final version. Technical Quality: 3 Clarity: 3 Questions for Authors: Why is tr(variance) sufficient to be minimized in the presented results? In general, most of the discussion revolves around asymptotic results; is it interesting to ask questions about non-asymptotic behaviors, such as how the performance of variable selection would impact non-asymptotic prediction error? Further, many practical applications could be overparametrized/high-dimensional; how would the results carry over? If not, what complications would arise while generalizing them to such settings? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful and positive review of our paper. We appreciate your constructive suggestion to provide a detailed discussion of why the A-OS and L-OS in Section 1 are affected by the scaling of the covariate $\mathbf{x}$. Yes, we also believe the point would be clarified by the formulas for the A-OS and L-OS probabilities derived from [21] for producing Figure 1. In the final version, we will add these details, along with a comprehensive discussion of the example in Section 1, to the appendix. Please see our responses to your specific questions below. **Q1. Sufficiency of minimizing $tr(\mathbf{V}_{w(\mathcal{A})})$.** **R1.** The subsample estimator is asymptotically unbiased (Theorem 1), implying that lower variance indicates a better subsample estimator. Optimal subsampling aims to improve estimation efficiency by finding subsampling probabilities that minimize variance. However, due to the lack of a complete ordering of variance matrices, the trace is often used as the criterion for minimization. For an asymptotically unbiased estimator, the trace ${tr}(\mathbf{V}_{w(\mathcal{A})})$ is interpreted as the asymptotic mean squared error (MSE), making it a widely used criterion in practice. **Q2. Discussion of non-asymptotic behaviors.** **R2.** Non-asymptotic behaviors, such as prediction error, are indeed of interest in variable selection. Our numerical experiments actually assess this non-asymptotic performance. However, non-asymptotic theoretical results are typically expressed as error bounds, which may be less applicable for defining optimal subsampling probabilities. The asymptotic distribution, on the other hand, provides a direct approximation to the distributional behavior of the subsample estimator, which makes it more suitable for defining optimal subsampling probabilities. We do not intend to downplay the importance of non-asymptotic behaviors.
In fact, we may investigate the theoretical non-asymptotic behaviors of our estimator in future research. **Q3. Applications in overparameterized/high-dimensional problems.** **R3.** For overparameterized/high-dimensional problems, if the model is sparse enough that the true model is low-dimensional, we conjecture that results such as the scale-invariant probabilities and the theoretical properties of the MSCL estimator still hold under some additional assumptions. However, if the model is not sparse, our results do not carry over. One reason is that asymptotic normality may no longer hold. While it may be possible to derive some non-asymptotic error bounds, it is unclear how to use them to define better subsampling probabilities for the problem in consideration. --- Rebuttal Comment 1.1: Comment: Thank you for your response and the clarifications. I would recommend updating the paper with these to make it stronger. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your valuable comments and suggestions. We will incorporate these points and update the paper accordingly.
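A brief note on the trace criterion discussed in R1 of the rebuttal above: for an asymptotically unbiased estimator, the trace of the asymptotic variance coincides with the asymptotic MSE via the standard bias-variance decomposition (generic notation; this derivation is ours, not quoted from the paper): $$ \mathbb{E}\|\hat{\theta} - \theta\|^2 = tr\{\mathrm{Var}(\hat{\theta})\} + \|\mathbb{E}\hat{\theta} - \theta\|^2. $$ Since the subsample estimator is asymptotically unbiased (Theorem 1), the squared-bias term vanishes asymptotically, leaving $tr(\mathbf{V}_{w(\mathcal{A})})$ as the asymptotic MSE, which is why minimizing the trace is a natural scalarization of the variance matrix.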
Summary: This paper introduces an optimal subsampling algorithm designed for imbalanced data with inactive features. The primary algorithm can be summarized in three steps: 1. Train a pilot model using a lasso-penalized objective. 2. Subsample the data based on the pilot model. 3. Train the subsampled data using an adaptive lasso-penalized algorithm. Additionally, the paper proposes two options for the model estimator: the IPW estimator and the more accurate MSCL estimator. Empirical results demonstrate the algorithm's effectiveness, showing that it outperforms the baseline estimator which does not apply variable selection. Strengths: 1. The paper addresses a key drawback of a previous "optimal estimator" for rare-events data by incorporating a lasso-based estimator. 2. The proposed algorithm is theoretically sound with solid asymptotic guarantees. 3. Empirical results on both synthetic and real data highlight the algorithm's effectiveness. Weaknesses: No technical weaknesses were observed in this paper. Technical Quality: 4 Clarity: 4 Questions for Authors: In the first stage screening of Algorithm 2, we use the Lasso estimator to manage computational costs and noise in the sampled data. If more computational resources are available, is Lasso always the best practical choice for pilot estimation? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: No significant limitations were observed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback on our work along with the insightful question. Please see below for our response. **Q1. If more computational resources are available, is Lasso always the best practical choice for pilot estimation?** **R1.** In general, better pilot estimation leads to a better final estimator. While we recommend the Lasso method, it may not be the best choice in all cases. More computationally intensive methods can achieve higher accuracy than the Lasso. However, if more computational resources are available, it is more beneficial to allocate the additional resources to subsequent steps rather than focusing solely on pilot estimation. Although we cannot assert that the Lasso is always the best choice, it is a robust and practical option for the problem at hand. We recommend the Lasso method for pilot estimation because it performs parameter estimation and active-set selection simultaneously. In addition, it tends to be conservative, meaning it has a low risk of excluding important variables. Other methods that combine parameter estimation and variable selection, such as sure independence screening, may also be considered. We do not recommend using the maximum likelihood estimator (MLE) under the full model because including many inactive variables reduces the efficiency of the approximated optimal subsampling function. For further illustration, we conducted a preliminary simulation to compare the performance of the Lasso pilot (Lasso), Sure Independence Screening (SIS) pilots, and the Lasso-followed-by-MLE pilot (LassoMLE). SIS requires pre-assigning the number of variables to select; we considered 3, 6, 13, and 20, labeled SIS3, SIS6, SIS13, and SIS20, respectively. LassoMLE computes the MLE on the variables selected by the Lasso.
Below are the eMSEs of the final P-OS estimator from 200 iterations of the simulation for Section 5.1 Case C of the main paper: | $\rho$ | Lasso | SIS3 | SIS6 | SIS13 | SIS20 | LassoMLE | |-------:|------:|------:|------:|------:|------:|---------:| | 0.0025 | 0.031 | 0.735 | 0.026 | 0.039 | 0.049 | 0.053 | | 0.0050 | 0.026 | 0.73 | 0.017 | 0.029 | 0.033 | 0.039 | | 0.0075 | 0.015 | 0.729 | 0.015 | 0.021 | 0.026 | 0.030 | The table above reveals that the SIS pilot may yield better results than the Lasso pilot when the pre-assigned number of variables is 6 (SIS6). However, determining the optimal pre-assigned number of variables in practice requires further investigation. On average, the Lasso pilot selects 13 variables and shows better performance than the SIS13 pilot. Unexpectedly, we found that the LassoMLE pilot performed worse than the Lasso pilot. Upon further examination, we discovered that the LassoMLE produces larger absolute estimates for some inactive variables. Consequently, these inactive variables receive a smaller penalty in the final adaptive Lasso estimator, as the penalty is proportional to $1/|\hat{\beta}|$. We greatly appreciate your insightful question, which has led to these additional interesting findings. The choice of the optimal pilot estimation method is indeed critical in practice, especially for sparse models. We plan to further investigate the effect of different pilot estimation procedures on the performance of the proposed algorithm.
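The penalty mechanism discussed above can be illustrated concretely: the adaptive lasso penalizes coefficient $j$ in proportion to $1/|\hat{\beta}_j|$, so an inflated pilot estimate on an inactive variable weakens its penalty. The sketch below uses made-up pilot estimates purely for illustration; it is not the paper's implementation.

```python
import numpy as np

# Hypothetical pilot estimates for 4 coefficients: indices 0-1 are active,
# indices 2-3 are inactive. The Lasso pilot shrinks inactive coefficients
# to exactly zero; an MLE refit (LassoMLE) may inflate one of them.
pilot_lasso = np.array([1.50, -0.80, 0.00, 0.00])
pilot_lassomle = np.array([1.60, -0.85, 0.30, 0.05])

def adaptive_weights(beta_pilot, eps=1e-12):
    """Adaptive-lasso penalty weights, w_j = 1 / |beta_pilot_j|.
    eps only avoids division by zero; a zero pilot estimate yields an
    effectively infinite penalty, excluding the variable."""
    return 1.0 / (np.abs(beta_pilot) + eps)

w_lasso = adaptive_weights(pilot_lasso)
w_lassomle = adaptive_weights(pilot_lassomle)

# The inactive variable (index 2) is penalized many orders of magnitude
# less under the LassoMLE pilot, so it is more likely to survive the
# final fit, consistent with the degradation observed in the table above.
```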
null
null
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their time and effort in assessing our work. Their insightful comments and questions are valuable in helping us improve the paper. Please find our individual responses to the reviewers' comments and questions below.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
On the Optimality of Dilated Entropy and Lower Bounds for Online Learning in Extensive-Form Games
Accept (poster)
Summary: This work studies first-order methods for equilibrium computation in extensive-form games (EFGs) and the effect of the regularization choice (distance-generating function). The key parameter for the performance of these methods is the ratio between the strong convexity modulus and the diameter of the regularizer. The main result is that a particular regularizer, namely the dilated entropy, is an optimal choice in terms of the aforementioned ratio. This regularizer can be used to recover prior results in EFGs, such as the regret performance of Kernelized OMWU. Strengths: The main result of the paper is essentially a toolbox for the dilated entropy (DilEnt) regularizer that can then be employed to get improved regret bounds. The main corollary of this toolbox is that running OMD with DilEnt gives the same performance as Kernelized OMWU, which employs the "normal-form reduction" of the EFG. I find this result interesting since it provides a nice unifying tool between prior works and the power of first-order methods. The paper is in general well-written, though some parts are a little dense and the notation is heavy (this is a general issue in this line of work). Essentially, the main result of the paper is a first-order method whose performance matches that of KOMWU. I think that the problem studied by the authors has a concrete motivation and the solution is nice. Hence, I believe that the paper is above the borderline for acceptance. Weaknesses: In terms of contribution, I believe that the main result complements the existing literature well. I do not have a concrete weakness to point out, but I have a couple of clarifying questions for the authors. Technical Quality: 4 Clarity: 4 Questions for Authors: 1) Is there some motivation in preferring a FOM rather than kernelization? 2) In the Clairvoyant OMD result, is the regret bound achieved by an efficient algorithm? Because clairvoyance requires solving a fixed-point problem, right?
3) Could the results obtained by either first-order methods with DilEnt or KOMWU hold for last-iterate convergence? 4) Regarding the lower bound of Theorem 5: is the result in the adversarial setting, where one player employs some algorithm and the other players decide "in the worst case" for the algorithm? Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! Q1: “Is there some motivation in preferring a FOM rather than kernelization?” A1: Given that FOMs have been extensively explored in the literature, they can benefit from existing techniques. An example of this is the adoption of the COMD approach in our paper, which leads to a better theoretical convergence rate for learning CCE in extensive-form games. Also, FOMs are generally more flexible than kernelization. With kernelization, one would be restricted to (variants of) the multiplicative weights update algorithm. In contrast, our DilEnt result can be applied to any first-order method that requires a strongly convex regularizer (including adaptive-stepsize methods, mirror prox, Nesterov’s excessive gap technique, etc., which do not have a kernelization equivalent). Furthermore, the analytic tools we introduce, including the treeplex norms, are likely of independent interest for first-order methods on extensive-form strategy sets and might apply to other regularizers as well. Q2: “In the Clairvoyant OMD result, is the regret bound achieved by an efficient algorithm? Because clairvoyance requires solving a fixed-point problem, right?” A2: Yes, COMD is indeed efficient. The fixed-point problem is solved via fixed-point iteration. By proving that the fixed-point iteration steps are contractions with respect to the treeplex norm (Lemma 4.5), the fixed-point subproblem is guaranteed to be solved to $\varepsilon$-accuracy within a number of iterations logarithmic in $1/\varepsilon$. Q3: “Could the results obtained by either first-order methods with DilEnt or KOMWU hold for last-iterate convergence?” A3: While this paper was not concerned with last-iterate convergence guarantees, by applying our new analysis framework to the result in [1], it should be straightforward to establish last-iterate convergence guarantees with a better dependency on game size in the settings for which [1] guarantees convergence.
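The fixed-point-iteration argument in A2 above can be sketched generically: for an $L$-contraction with $L < 1$, the Banach fixed-point theorem gives geometric error decay, so $\varepsilon$-accuracy takes $O(\log(1/\varepsilon))$ iterations. The map below is a toy contraction of our own, not the COMD update.

```python
def solve_fixed_point(f, x0, eps, L):
    """Fixed-point iteration x <- f(x) for an L-contraction (L < 1).
    Since |x_{k+1} - x*| <= (L / (1 - L)) * |x_{k+1} - x_k|, stopping
    when |f(x) - x| <= eps * (1 - L) / L guarantees the returned point
    is within eps of the true fixed point."""
    x, iters = x0, 0
    while True:
        x_next = f(x)
        if abs(x_next - x) <= eps * (1.0 - L) / L:
            return x_next, iters
        x, iters = x_next, iters + 1

# Toy 0.5-contraction with fixed point x* = 2: f(x) = 0.5 x + 1.
x_star, n_iters = solve_fixed_point(lambda x: 0.5 * x + 1.0,
                                    x0=10.0, eps=1e-8, L=0.5)
# The error halves each step, so roughly 30 iterations reach 1e-8 accuracy.
```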
Q4: “Regarding the lower bound of Theorem 5: is the result in the adversarial setting, where one player employs some algorithm and the other players decide "in the worst case" for the algorithm?” A4: Yes, the result given by Theorem 5.1 is in the adversarial setting, which exactly matches the upper bound provided in Theorem 4.4. This is the standard online learning setting in which the usual lower bounds (for example, $\Omega(\sqrt{T \log n})$ for a probability simplex) are given. Lower bounds in the learning-in-games settings are widely unknown. [1] Lee, Chung-Wei, Christian Kroer, and Haipeng Luo. "Last-iterate convergence in extensive-form games." Advances in Neural Information Processing Systems 34 (2021): 14293-14305.
Summary: The paper examines Distance Generating Functions (DGFs), a framework developed for providing online learning and equilibrium-computing first-order methods for extensive-form games (EFGs). Specifically, DGFs were introduced in the literature as a form of regularization tailored to the strategy space of EFGs. Combining Online Mirror Descent with DGFs leads to first-order methods for equilibrium computation in EFGs. A DGF $\phi$ is required to be $\mu$-strongly convex with respect to some norm. An important quantity of interest is the ratio between the diameter introduced by the Bregman divergence induced by $\phi$ and the $\mu$-strong convexity constant, i.e., $\mathcal{D_\phi}/\mu$. This ratio appears in the regret bounds and affects the convergence rates. Consequently, the authors investigate which DGF achieves the best diameter/strong convexity ratio. They demonstrate that a specific form of DGF, the weight-one dilated entropy (DilEnt), achieves the optimal diameter/strong-convexity ratio up to logarithmic factors out of all DGFs. More precisely, DilEnt admits an $\mathcal{O}(\log |V|)$ diameter/strong-convexity ratio, where $V$ denotes the set of possible pure strategies for the EFGs. Additionally, they demonstrate that combining DilEnt with the recently proposed framework of Clairvoyant Mirror Descent can lead to state-of-the-art convergence rates for computing Coarse Correlated Equilibrium (CCE) in EFGs. Strengths: The DGF framework has been instrumental in designing no-regret and efficient equilibrium computation algorithms for EFGs. Consequently, I believe that characterizing the DGF that achieves the optimal diameter/strong-convexity ratio is a significant result for the game theory community. Additionally, the paper introduces intriguing technical concepts by incorporating primal-dual treeplex norms to bound the diameter/strong-convexity ratio. 
Finally, by combining their approach with the Clairvoyant framework, the paper presents a substantial improvement in convergence rates for computing Coarse Correlated Equilibrium (CCE) in EFGs, reducing the rate from $\mathcal{O}(\log^4 T/T)$ to $\mathcal{O}(\log T/T)$. Weaknesses: Despite the fact that all the presented results are interesting and introduce compelling technical ideas, I have some doubts about the main takeaway of the paper. As far as I understand, DilEnt was introduced by [1] to establish an equivalence between Mirror Descent and the kernelized approach of [2], which already achieved the $\mathcal{O}(\sqrt{\log |V| T })$ regret bounds for EFGs. From this perspective, the $\mathcal{O}(\log |V|)$ bound on the diameter/strong-convexity ratio is somewhat expected. While I acknowledge that establishing this bound is far from trivial and technically challenging, it can also be seen as an alternative way of retrieving an already known result. The main takeaway message of the paper might be the lower bound, which establishes that the results of [2] cannot be improved by adopting the DGF approach. If this is the case, highlighting this point would greatly benefit the paper. Regarding the improved rate for CCE through the use of Clairvoyant Mirror Descent, is the current result truly necessary to achieve this improvement? I am not entirely sure, but I have a feeling that coupling the clairvoyant approach with the previous results of [2] would result in the same improvement. That being said, I believe this is an interesting paper, and I am willing to further increase my score if these points are addressed. [1] Bai et al. Efficient phi-regret minimization in extensive-form games via online mirror descent [2] Farina et al. Kernelized multiplicative weights for 0/1-polyhedral games: Bridging the gap between learning in extensive-form and normal-form games Technical Quality: 3 Clarity: 2 Questions for Authors: See weaknesses above. 
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! Q1: “The main takeaway message of the paper might be the lower bound, which establishes that the results of [2] cannot be improved by adopting the DGF approach. If this is the case, highlighting this point would greatly benefit the paper.” A1: Thanks for the suggestions. In our opinion, however, the main takeaway of the paper is twofold. One is the lower bound, which suggests that KOMWU [2] is nearly optimal—as you said. The other, which we think is very important, is the analysis technique based on the treeplex norms. We believe that this contribution creates important analytical tooling that can be of independent interest for future research on regularization of these important combinatorial domains. As a byproduct, we are able to establish that the DilEnt regularizer achieves the near-optimal diameter-to-strong-convexity ratio for EFGs with full-information feedback. This also reconciles the more ad-hoc analysis based on kernelization (references [1,2] in your review) with the standard analysis of first-order methods and mirror descent, resolving the disconnect in the analysis. Q2: “Regarding the improved rate for CCE through the use of Clairvoyant Mirror Descent, is the current result truly necessary to achieve this improvement?” A2: This question is more speculative and it’s hard to know for sure. But, we are fairly confident that the new analytic tools we introduced, that are in line with the standard analysis of mirror descent-based algorithms, provide the natural groundwork for analyzing the algorithm. In particular, Clairvoyant MWU uses fixed-point iterations on a map that has to be shown to be contractive with respect to some norm. The treeplex norm is a very natural candidate for extensive-form games, as we show in the paper. It does not seem immediately straightforward to replicate these results for the kernelization approach. [1] Bai, Yu, et al. 
"Efficient Phi-regret minimization in extensive-form games via online mirror descent." Advances in Neural Information Processing Systems 35 (2022): 22313-22325. [2] Farina, Gabriele, et al. "Kernelized multiplicative weights for 0/1-polyhedral games: Bridging the gap between learning in extensive-form and normal-form games." International Conference on Machine Learning. PMLR, 2022. --- Rebuttal Comment 1.1: Title: Reviewer's response Comment: Thank you for the response. I still do not understand why the kernelization approach cannot be directly coupled with the Clairvoyant framework. I believe that the latter should be elaborated in detail in the revised manuscript. Nevertheless I think that the paper is interesting and I will keep my score.
Summary: This paper studies the optimality of dilated entropy functions for online learning in extensive-form games. The weight-one dilated entropy is shown to be optimal up to logarithmic factors, for which a new lower bound is also provided. Strengths: The paper studies the very relevant problem of the optimal choice of dilated entropy function, as it provides insights into the practical implementation of online learning with dilated entropy in real-world applications. The result on the weight-one dilated entropy, together with the new matching lower bound, is an addition to the field of online learning in games. Additionally, the new analysis of OMD using treeplex norms is also new (to the best of my knowledge), and may be of separate interest for solving extensive-form games. Weaknesses: I would suggest adding more explanation and providing some insight into the proofs of the theorems in the main paper, i.e., how the treeplex norms and the diameter-to-strong-convexity ratio of DilEnt are used. Technical Quality: 3 Clarity: 3 Questions for Authors: Could you provide some insight into how the treeplex norms and the diameter-to-strong-convexity ratio of DilEnt are used? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! Q: “Could you provide some insight into how the treeplex norms and the diameter-to-strong-convexity ratio of DilEnt are used?” A: The ratio between the diameter and the strong convexity of the regularizer on the feasible set (in the case of extensive-form games, the feasible set of every player is their sequence-form strategy polytope, also known as treeplex) is a key quantity in the performance of mirror descent-based algorithms. Intuitively, performance degrades as the diameter grows (as there is more “space” to search over), and improves the more the regularizer is bowl-shaped (i.e., strongly convex), as the minimum of the function becomes easier to predict. Of course, there is a tradeoff between these two parameters: if the regularizer is too strongly convex, then it grows very fast, making the diameter of the set, $D_\varphi := \max_{x \in Q} D_\varphi(x \| x_1) = \max_{x \in Q} \varphi(x) - \min_{x \in Q} \varphi(x)$, larger. Thanks for the suggestion, we will include this intuitive discussion in the revision.
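As a concrete instance of the quantity $D_\varphi$ defined in the answer above, consider the simplest strategy set, the probability simplex, with the negative-entropy regularizer $\varphi(x) = \sum_j x_j \log x_j$ (1-strongly convex with respect to $\ell_1$): its maximum over the simplex is $0$ (at a vertex) and its minimum is $-\log n$ (at the uniform distribution), giving diameter $\log n$. The check below is our own illustration, not the paper's treeplex construction.

```python
import numpy as np

def neg_entropy(x):
    """phi(x) = sum_j x_j log x_j, with the convention 0 log 0 = 0."""
    xp = x[x > 0]
    return float(np.sum(xp * np.log(xp)))

n = 8
vertex = np.zeros(n)
vertex[0] = 1.0                  # maximizer of phi over the simplex: phi = 0
uniform = np.full(n, 1.0 / n)    # minimizer of phi: phi = -log n

diameter = neg_entropy(vertex) - neg_entropy(uniform)
# diameter equals log n: the O(log n) ratio behind sqrt(T log n) regret bounds
```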
Summary: The paper offers significant advancements in understanding the efficacy of distance-generating functions (DGFs) for extensive-form games (EFGs), specifically focusing on the optimization and application of first-order methods (FOMs). Central to the study is the exploration of the weight-one dilated entropy (DilEnt) regularizer, which is proven to be optimal up to logarithmic factors for these games. Overall, the paper makes a strong contribution to the field of algorithmic game theory, particularly in optimizing computational approaches for EFGs, and sets a new benchmark for subsequent research in this area. It opens up several avenues for further exploration, particularly in refining these methods for broader classes of games and in practical, real-world settings. Strengths: - The paper successfully establishes that DilEnt provides an optimal distance-generating function in terms of the diameter-to-strong convexity ratio for the strategy spaces in EFGs. - By developing new primal-dual treeplex norms, the authors enhance the analytical framework for FOMs in EFGs, allowing for more precise performance predictions that align with the capabilities of the Kernelized Optimistic Multiplicative Weight Update (KOMWU) algorithm. - The research presents a comprehensive analysis of the DilEnt regularizer's performance, including establishing new state-of-the-art approximation rates for achieving coarse correlated equilibrium in multiplayer games. Weaknesses: - While the paper makes substantial theoretical advancements, there is little empirical validation of the proposed methods across real-world datasets or diverse game settings, which might be necessary to fully understand the practical implications.
- The work primarily focuses on EFGs with perfect recall and does not extensively address scenarios with imperfect recall or partial information, which could be relevant for a broader range of applications. Technical Quality: 3 Clarity: 3 Questions for Authors: Is it possible to conduct some proof-of-concept experiments to validate the theoretical findings? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors discussed limitations and future directions in their conclusions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback and your praise of our paper! We address the two weaknesses you mentioned: - Your claim that our paper does not deal with “partial information” is wrong. The techniques of the paper apply to imperfect-information extensive-form games. You are however right that the paper focuses on the case of perfect-recall games (i.e., players that do not forget their observations). This is a standard assumption. Equilibrium computation and learning in imperfect-recall games is known to be computationally intractable. - Regarding empirical validation: you are right that we dedicated the entire paper to developing theoretical contributions. However, other work has examined the empirical performance of the DilEnt regularizer (e.g., [1]), though without the theoretical understanding and machinery that we introduce in this paper to analyze its metric properties. [1] Lee, Chung-Wei, Christian Kroer, and Haipeng Luo. "Last-iterate convergence in extensive-form games." Advances in Neural Information Processing Systems 34 (2021): 14293-14305. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have read the rebuttal and decide to keep my score.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Robust Contrastive Multi-view Clustering against Dual Noisy Correspondence
Accept (poster)
Summary: The paper presents a novel method, Contextually-spectral based correspondence refinery (CANDY), to address the Dual Noisy Correspondence (DNC) problem in contrastive multi-view clustering (MvC). CANDY utilizes inter-view similarities as context and employs a spectral-based module for denoising correspondence, effectively mitigating the influence of false positives and uncovering false negatives. The method's effectiveness is demonstrated through extensive experiments on five multi-view benchmarks, outperforming eight competitive MvC methods. Strengths: - CANDY introduces a new perspective to handle DNC, which is practical and challenging in MvC. This paper demonstrated potential capacity as a plug-and-play solution to enhance the robustness of existing contrastive MvC methods. - The combination of context-based semantic mining and spectral-based denoising provides a robust solution against noisy correspondence. Extensive experiments and comparisons with state-of-the-art methods validate the effectiveness of CANDY. Weaknesses: 1. How will the choice of the Gaussian kernel parameter $\sigma$ influence the performance of CANDY? The sensitivity of the clustering results to this parameter needs to be thoroughly examined, as different values of $\sigma $ might affect the affinity graph’s construction. 2. Can the authors provide insights into the computational overhead introduced by the CANDY method? 3. How are the false negative and false positive proportions handled in the experiments? Further clarification is needed on how this proportion is maintained across different datasets and experimental conditions. Does the false negative proportion keep fixed as 1/K in Table 1? 4. An interesting question arises from the paper’s claim that CANDY can serve as a plug-and-play solution. Is it possible to integrate the pseudo target generated by CANDY into other contrastive multi-view clustering methods beyond DIVIDE? 
Exploring this potential would demonstrate the versatility of CANDY and its applicability to a broader range of clustering techniques, providing valuable insights into how the pseudo target can enhance other methods’ robustness against noisy correspondences. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your acknowledgment of our method. We will address your questions one by one. > ***Question 1**: **How will the choice of the Gaussian kernel parameter σ influence the performance** of CANDY? The sensitivity of the clustering results to this parameter needs to be thoroughly examined, as different values of σ might affect the affinity graph's construction.* Thank you for your suggestions. As you suggested, we conduct parameter analysis experiments to investigate the influence of $\sigma$ on the Caltech-101 dataset by varying the value of $\sigma$ from [0.01, 0.23] with an interval of 0.02. The results are summarized in the following table. | Dataset \\ $\sigma$ | 0.01 | 0.03 | 0.05 | 0.07 | 0.09 | 0.11 | 0.13 | 0.15 | 0.17 | 0.19 | 0.21 | 0.23 | | -------------------- | ----- | ----- | --------- | ----- | ----- | ----- | --------- | ----- | ----- | ----- | ----- | ----- | | Caltech-101 | 16.13 | 16.75 | **60.97** | 60.71 | 60.31 | 51.41 | 47.22 | 41.97 | 37.14 | 34.72 | 32.37 | 32.98 | From the results, one could observe that our method performs stably in the range of [0.05, 0.09] for $\sigma$. Thus, we simply set the Gaussian kernel parameter $\sigma$ as 0.07 for all experiments in the paper. For a clear observation, we include the plot of the result as Figure S1 in the rebuttal PDF. > ***Question 2**: Can the authors provide insights into the **computational overhead** introduced by the CANDY method?* As you suggested, we run DIVIDE and our CANDY with the same network architecture for a fair comparison, and report their running time (in seconds, per epoch) in the following table. 
| Model | Caltech101 | LandUse21 | NUSWIDE | Reuters | Scene15 | | ------------ | ---------- | --------- | ------- | ------- | ------- | | DIVIDE | 0.1438 | 0.0647 | 0.2286 | 0.4617 | 0.1346 | | CANDY (Ours) | 2.4971 | 0.7010 | 2.1663 | 4.6905 | 1.2743 | As indicated in the table, CANDY takes approximately 10 times the computational time per epoch compared to DIVIDE. The additional computational overhead is mainly attributed to the Singular Value Decomposition (SVD) procedure in our approach. In the future, we plan to explore lightweight and efficient alternatives for SVD. > ***Question 3:** How are the false negative and false positive proportions handled in the experiments? Further clarification is needed on **how this proportion is maintained across different datasets and experimental conditions**. Does the false negative proportion keep fixed as 1/K in Table 1?* As per your suggestions, we have given the factual noise ratios for both false positives (FPs) and false negatives (FNs) in the following table. Specifically, we set one view as the anchor view, and simulate FP samples by randomly shuffling the other view by a given percentage, denoted by FP ratio $\eta$. FN is inherent to the dataset. For example, the FN ratio of a dataset is $1 / K$ if its samples are evenly distributed in $K$ classes. 
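The simulation protocol just described (randomly shuffle a fraction $\eta$ of one view against the anchor view) can be sketched as follows; the function and variable names are ours. Note that a random permutation of the chosen subset leaves a few fixed points, which is why factual FP ratios sit slightly under $\eta$, as in the table below.

```python
import numpy as np

def corrupt_correspondence(view, eta, rng):
    """Simulate false positives: permute a random fraction eta of the
    samples in one view, breaking their pairing with the anchor view."""
    n = len(view)
    idx = np.arange(n)
    chosen = rng.choice(n, size=int(eta * n), replace=False)
    idx[chosen] = rng.permutation(chosen)  # shuffle only the chosen rows
    return view[idx], idx

rng = np.random.default_rng(1)
anchor_ids = np.arange(1000)   # stand-in for the second view's paired samples
_, idx = corrupt_correspondence(anchor_ids, eta=0.5, rng=rng)

# Factual FP ratio: fraction of pairs actually broken (slightly below eta,
# because the permutation may map a few chosen samples back to themselves).
fp_ratio = float(np.mean(idx != np.arange(1000)))
```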
| $\eta$ | Caltech101 | | LandUse21 | | NUSWIDE | | Reuters | | Scene15 | | | ------ | ---------- | ---- | --------- | ---- | ------- | ---- | ------- | ----- | ------- | ---- | | | FP | FN | FP | FN | FP | FN | FP | FN | FP | FN | | 0.0 | 0.00 | 2.84 | 0.00 | 4.73 | 0.00 | 9.99 | 0.00 | 21.40 | 0.00 | 6.91 | | 0.2 | 19.34 | 2.84 | 19.10 | 4.73 | 17.98 | 9.99 | 15.67 | 21.40 | 18.68 | 6.91 | | 0.5 | 48.45 | 2.84 | 47.33 | 4.73 | 45.07 | 9.99 | 39.64 | 21.40 | 46.42 | 6.91 | | 0.8 | 77.48 | 2.84 | 76.24 | 4.73 | 72.03 | 9.99 | 62.57 | 21.40 | 73.98 | 6.91 | In the next version, we will detail the simulation procedure of noisy correspondence and the practical noise ratios in our experiments. > ***Question 4**: An interesting question arises from the paper's claim that CANDY can serve as a plug-and-play solution. Is it possible to **integrate the pseudo target generated by CANDY into other contrastive multi-view clustering methods** beyond DIVIDE?* Our research has considered the broader applicability of CANDY, and we have conducted experiments to demonstrate its integration with other methods beyond DIVIDE. In Section 4.6 of our manuscript, we present the results of integrating CANDY with another model, AECoKM, on the NUS-WIDE dataset. Figure 5(c) summarizes the performance improvements observed with AECoKM under different false positive (FP) ratios. For your convenience, we have converted the figure as a table and attached it below. 
| FP Ratio | | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | | ------------- | --- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | | AECoKM | ACC | 15.80 | 15.70 | 15.86 | 16.94 | 18.36 | 16.86 | 17.64 | 18.02 | 17.27 | 18.37 | | | NMI | 2.56 | 2.56 | 2.38 | 3.67 | 5.62 | 3.36 | 4.28 | 4.27 | 3.59 | 4.16 | | | ARI | 1.12 | 1.14 | 1.10 | 1.57 | 2.42 | 1.49 | 1.98 | 2.15 | 1.84 | 2.20 | | AECoKM + Ours | ACC | 57.88 | 50.25 | 55.68 | 56.72 | 54.89 | 54.82 | 53.85 | 56.12 | 54.93 | 53.44 | | | NMI | 46.83 | 39.04 | 45.27 | 44.27 | 41.94 | 42.48 | 40.98 | 42.61 | 41.43 | 40.51 | | | ARI | 37.98 | 30.63 | 36.03 | 36.21 | 33.94 | 34.37 | 33.49 | 35.22 | 34.14 | 33.00 | --- Rebuttal Comment 1.1: Comment: Thanks very much for the detailed response. The authors have provided solid and convincing experimental results to answer my problems. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your positive recognition and assessment of our work!
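As a supplement to Question 1 above, the Gaussian-kernel affinity construction whose bandwidth $\sigma$ was varied there can be sketched with numpy. The kernel form $A_{ij} = \exp(-\|x_i - x_j\|^2 / (2\sigma^2))$ is a standard choice, and the random data are our own illustration rather than CANDY's exact pipeline; the sketch only shows why extreme values of $\sigma$ destroy the context information.

```python
import numpy as np

def gaussian_affinity(X, sigma):
    """Affinity graph A_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    d2 = np.maximum(d2, 0.0)  # guard against tiny negatives from round-off
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))

# Too small a sigma: the graph degenerates to (near) identity, so no
# inter-sample context survives; too large a sigma: all affinities
# saturate near 1 and the context is washed out.
A_small = gaussian_affinity(X, sigma=0.01)
A_large = gaussian_affinity(X, sigma=100.0)
```

This matches the sensitivity pattern in the $\sigma$ table above: performance collapses for very small $\sigma$ and degrades steadily once $\sigma$ grows past the stable range.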
Summary: The authors delve into the study of contrastive multi-view clustering (MVC) and aim to address the false positive and false negative correspondence issues, collectively referred to as dual noisy correspondence. To tackle this problem, they propose a two-fold solution named CANDY. Firstly, CANDY exploits inter-view similarities as context to uncover false negatives. Secondly, it employs a spectral-based module to denoise correspondences, thereby mitigating the negative impact of false positives. One of the most interesting insights in this paper is the observation that context can serve as a new axis for transforming data similarity into a high-order affinity space. To verify the effectiveness of CANDY, the authors conduct experiments across various datasets and settings. Strengths: i) This paper is well-written and structured, making it accessible to readers even outside the immediate research community. Additionally, the experiment designs are intriguing, and the extensive experiments convincingly demonstrate the effectiveness of the proposed methods and the two corresponding modules. ii) The proposed method is technically sound. Moreover, it can serve as a plug-and-play module that can be integrated into other contrastive MVC methods, enhancing their robustness. Weaknesses: i) I have some doubts regarding the results in Figure 4. Firstly, how is the cross-view similarity matrix arranged? The current form appears to be arranged by the ground truth, whereas my understanding is that the ground truth should be agnostic during training. Secondly, how is robustness compared in the figure? While the proposed method can recall more false negative samples, it seems to introduce more wrongly recalled samples. How is the trade-off between truly and falsely recalled samples balanced? ii) Regarding the references in Line 31, is the observation in Fig. 1(a) first proposed in this paper? iii) Two open-world questions. 
Firstly, robustness against incomplete views is another important topic in the MVC community. Some baselines in the paper address this issue. Can the proposed method be extended to handle incomplete views? If so, this would significantly enhance the paper’s contribution. Secondly, the approach is mainly presented in scenarios involving two views. How could the method be extended to handle more views (≥ 3)? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive reviews and suggestions. Below, we will address each of your questions. > ***Question 1.1**: I have some doubts regarding the results in Figure 4. Firstly, **how is the cross-view similarity matrix arranged**? The current form appears to be arranged by the ground truth, whereas my understanding is that the ground truth should be agnostic during training.* The cross-view similarity matrix depicted in Figure 4 is arranged according to the ground-truth labels. To be specific, we first compute the similarities of cross-view representations and then arrange them according to the labels. In other words, each diagonal block in the matrix represents the similarities of within-class samples. This re-arrangement allows us to investigate the false-negative recall capacity by comparing the consistency between the cross-view similarity matrix and the ground-truth one. Notably, the labels are only utilized for the above analysis experiment, and remain unseen during both the training and inference phases. > ***Question 1.2**: Secondly, **how is robustness compared in the figure**? While the proposed method can recall more false negative samples, it seems to introduce more wrongly recalled samples. How is the trade-off between truly and falsely recalled samples balanced?* To quantitatively compare the robustness between the baselines and our method, we additionally report the Mean Squared Error (MSE) to measure the difference between each cross-view similarity matrix and the ground-truth one in the figure. More specifically, a smaller MSE value indicates greater robustness of the method against false negatives. Notably, as truly and falsely recalled samples in local regions cannot reflect the global difference, it is sub-optimal to measure the robustness by directly observing the cross-view similarity matrix.
In contrast, the MSE value seeks a trade-off between truly and falsely recalled samples from the global perspective, thus being a promising metric to demonstrate the robustness against false negatives. > ***Question 2**: Regarding the references in Line 31, **is the observation in Fig. 1(a) first proposed in this paper**?* Yes, our work could be the first study to reveal the dual noisy correspondence (DNC) challenge in the multi-view clustering community. In brief, DNC refers to the noise present in both cross-view positive and negative pairs. Although the references in Line 31 reveal the noise in the training multimodal data, they either do not explicitly handle the noise [A,B,D,E], or only address the false negative issue [C]. [A] Learning transferable visual models from natural language supervision. International Conference on Machine Learning, 2021. [B] Scaling up visual and vision-language representation learning with noisy text supervision. International Conference on Machine Learning, 2021. [C] Robust multi-view clustering with incomplete information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. [D] Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. Proceedings of ACL, 2018. [E] HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips. International Conference on Computer Vision, 2019. > ***Question 3.1**: Firstly, robustness against incomplete views is another important topic in the MVC community. Some baselines in the paper address this issue. Can the proposed method be extended to **handle incomplete views**? If so, this would significantly enhance the paper's contribution.* Thank you for your constructive suggestions. As you suggested, we follow DIVIDE [F] to endow our method with the capacity of handling incomplete views.
In brief, we utilize the cross-view decoders to recover the missing samples and perform k-means on all the latent representations to achieve incomplete multi-view clustering. The results under the incomplete setting (50% missing) are summarized in the following table, where the results of baseline methods are copied from the DIVIDE paper.

| Model | Scene15 ACC | NMI | ARI | Caltech101 ACC | NMI | ARI | Reuters ACC | NMI | ARI | LandUse21 ACC | NMI | ARI | Average ACC | NMI | ARI |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| SURE | 39.6 | 41.6 | 23.5 | 34.6 | 57.8 | 19.9 | 47.2 | 30.9 | 23.3 | 23.1 | 28.6 | 10.6 | 36.1 | 39.7 | 19.3 |
| DIVIDE | **46.8** | **45.7** | **29.1** | 63.4 | 82.5 | 52.4 | **54.7** | **37.3** | **28.6** | **30.0** | **35.8** | **16.0** | **48.7** | **50.3** | 31.5 |
| CANDY (Ours) | 40.0 | 40.2 | 24.1 | **69.5** | **83.9** | **65.5** | 54.2 | 34.8 | 27.2 | 28.8 | 31.1 | 14.4 | 48.1 | 47.5 | **32.8** |

From the results, one could observe that our method achieves promising performance in handling the incomplete view problem, even though it is primarily designed to address the dual noisy correspondence challenge. [F] Decoupled Contrastive Multi-view Clustering with High-order Random Walks. AAAI Conference on Artificial Intelligence, 2024. > ***Question 3.2**: Secondly, the approach is mainly presented in scenarios involving two views. **How could the method be extended to handle more views (≥ 3)**?* According to your comment, we extend CANDY to scenarios involving more than two views. Specifically, following [G], we conduct experiments on the Caltech7 dataset using two views (Caltech7-2V) and three views (Caltech7-3V). The results, summarized in the following table, demonstrate CANDY's capability in handling multiple views.
| Method | FP ratio | Caltech7-2V | Caltech7-3V |
| ------------ | -------- | ----------- | ----------- |
| CANDY (Ours) | 0.0 | 52.16 | 60.37 |
| | 0.5 | 41.96 | 47.93 |

[G] Multi-level Feature Learning for Contrastive Multi-view Clustering. The IEEE / CVF Computer Vision and Pattern Recognition Conference, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for the author's response, which has effectively addressed my concerns. I will raise my score on the paper. --- Reply to Comment 1.1.1: Comment: Thank you for raising your score! We appreciate the time and effort you dedicated to reviewing this work.
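The analysis protocol described in Questions 1.1 and 1.2 above (arranging the cross-view similarity matrix by ground-truth labels, then scoring robustness by MSE against the ground-truth matrix) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; in particular, the block-diagonal form assumed for the ground-truth similarity matrix is our assumption:

```python
import numpy as np

def arrange_by_labels(sim, labels):
    """Re-order rows/columns of a cross-view similarity matrix by
    ground-truth labels, so within-class similarities form diagonal blocks.
    Labels are used only for this post-hoc analysis, never for training."""
    order = np.argsort(labels, kind="stable")
    return sim[np.ix_(order, order)]

def robustness_mse(sim, labels):
    """MSE between the similarity matrix and an assumed ideal target where
    within-class entries are 1; smaller means more robust to false negatives."""
    labels = np.asarray(labels)
    target = (labels[:, None] == labels[None, :]).astype(float)
    return float(np.mean((sim - target) ** 2))
```

As in the response, the labels enter only this post-hoc analysis, never the training or inference phases.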
Summary: This paper addresses a new problem called dual noisy correspondence, which the authors claim is practical and underexplored in the multi-view learning community. Dual noisy correspondence refers to two challenges: 1) false positive correspondences induced by irrelevant multi-view data and 2) false negative correspondences caused by the random sampling characteristic of contrastive learning. To address this problem, the authors propose a novel metric for cross-view similarity estimation that recalls more false negative pairs, and a spectral-based denoising method to address false positive correspondences. Extensive experiments on multiple datasets validate the superiority of the proposed method compared to existing approaches. Overall, this paper makes an important contribution by tackling the practical and underexplored problem of dual noisy correspondence in multi-view learning. The technical solutions proposed, including the new similarity metric and spectral denoising, demonstrate strong empirical performance. Strengths: The paper introduces a novel perspective on cross-view similarity measurement. Specifically, the authors observe that the row-wise similarity in the first-order cross-view similarity matrix, referred to as context in the paper, can serve as an effective metric for similarity estimation. Experiments show that this approach recalls more potential false negative pairs. Extensive experiments on five benchmarks with four types of noise ratios comprehensively verify the effectiveness of the proposed method. Weaknesses: The authors claim to address a new problem, dual noisy correspondence (false positive and false negative). However, reviewers noted that similar issues have been explored in works [A] of other fields. Additional claims and discussions are needed to clarify the novelty of the proposed setting, which the authors regard as a significant contribution. 
[A] Cross-modal retrieval with noisy correspondence via consistency refining and mining, TIP. -How about the time cost of the proposed method? It is essential to compare this with other methods, particularly the peer method (e.g., DIVIDE) with the same network architecture. The reviewer suggests that the noise ratios for both false positive and false negative correspondences should be explicitly presented in the main table for a more comprehensive understanding of performance and comparisons. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) What is the novelty of the proposed setting? (2) What is the additional time cost? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable review. We will address your questions one by one. > _**Question 1**: The authors claim to address a new problem, dual noisy correspondence (false positive and false negative). However, reviewers noted that similar issues have been explored in works [A] of other fields. Additional claims and discussions are needed to **clarify the novelty of the proposed setting**, which the authors regard as a significant contribution._ Thank you for pointing out the related work [A], which addresses both the false positive and false negative correspondence for the image-text retrieval task. Although [A] shares some similarities with our work, the definitions of false positives and false negatives are remarkably different. First, [A] is specialized for the retrieval task, where false positives emerge if and only if the given two samples do not belong to the same instance. In contrast, in our work, false positives are defined as wrongly matched pairs consisting of samples from different classes. Second, [A] seeks to mine semantic-consistent samples and regards them as false negatives, whereas our work considers arbitrary cross-view within-class samples as false negatives. Intuitively, given a false positive/negative pair, our work has a probability of $\frac{1}{K}$ for both false positive re-alignment and false negative recalling, while the probabilities for [A] are $\frac{1}{N}$, where $K$ is the number of classes and $N$ is the number of instances in the dataset. Owing to these differences, it is difficult or even impossible to realign mismatched pairs in [A], while our method can achieve this since $K$ is much smaller than $N$. [A] Cross-modal Retrieval with Noisy Correspondence via Consistency Refining and Mining. IEEE Transactions on Image Processing, 2024. > ***Question 2**: How about the **time cost** of the proposed method?
It is essential to compare this with other methods, particularly the peer method (e.g., DIVIDE) with the same network architecture.* As you suggested, we run DIVIDE and our CANDY with the same network architecture for a fair comparison, and report their running time (in seconds, per epoch) in the following table.

| Model | Caltech101 | LandUse21 | NUSWIDE | Reuters | Scene15 |
| ------------ | ---------- | --------- | ------- | ------- | ------- |
| DIVIDE | 0.1438 | 0.0647 | 0.2286 | 0.4617 | 0.1346 |
| CANDY (Ours) | 2.4971 | 0.7010 | 2.1663 | 4.6905 | 1.2743 |

As indicated in the table, CANDY takes approximately 10 times the computational time per epoch compared to DIVIDE. The additional computational overhead is mainly attributed to the Singular Value Decomposition (SVD) procedure in our approach. In the future, we plan to explore lightweight and efficient alternatives for SVD. > ***Question 3**: The reviewer suggests that the **noise ratios for both false positive and false negative correspondences should be explicitly presented in the main table** for a more comprehensive understanding of performance and comparisons.* As per your suggestions, we have given the factual noise ratios for both false positives (FPs) and false negatives (FNs) in the following table. Specifically, we set one view as the anchor view, and simulate FP samples by randomly shuffling the other view by a given percentage, denoted by FP ratio $\eta$. FN is inherent to the dataset. For example, the FN ratio of a dataset is $1 / K$ if its samples are evenly distributed in $K$ classes.
| $\eta$ | Caltech101 FP | Caltech101 FN | LandUse21 FP | LandUse21 FN | NUSWIDE FP | NUSWIDE FN | Reuters FP | Reuters FN | Scene15 FP | Scene15 FN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.0 | 0.00 | 2.84 | 0.00 | 4.73 | 0.00 | 9.99 | 0.00 | 21.40 | 0.00 | 6.91 |
| 0.2 | 19.34 | 2.84 | 19.10 | 4.73 | 17.98 | 9.99 | 15.67 | 21.40 | 18.68 | 6.91 |
| 0.5 | 48.45 | 2.84 | 47.33 | 4.73 | 45.07 | 9.99 | 39.64 | 21.40 | 46.42 | 6.91 |
| 0.8 | 77.48 | 2.84 | 76.24 | 4.73 | 72.03 | 9.99 | 62.57 | 21.40 | 73.98 | 6.91 |

In the next version, we will detail the simulation procedure of noisy correspondence and the practical noise ratios in our experiments.
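The noise-simulation procedure just described (shuffle a fraction $\eta$ of the non-anchor view; false negatives are inherent, roughly $1/K$ for $K$ balanced classes) might look as follows in NumPy. The function names and the exact sampling scheme are illustrative assumptions, not the authors' code:

```python
import numpy as np

def simulate_false_positives(view, fp_ratio, seed=0):
    """Randomly shuffle a fraction `fp_ratio` of samples in the non-anchor
    view, breaking their correspondence with the anchor view (FP noise)."""
    rng = np.random.default_rng(seed)
    n = len(view)
    idx = rng.choice(n, size=int(fp_ratio * n), replace=False)
    noisy = view.copy()
    noisy[idx] = view[rng.permutation(idx)]
    return noisy, idx

def fn_ratio(labels):
    """Inherent false-negative ratio: probability that a random negative
    pair is actually within-class (about 1/K for K balanced classes)."""
    labels = np.asarray(labels)
    _, counts = np.unique(labels, return_counts=True)
    n = labels.size
    return float((counts * (counts - 1)).sum() / (n * (n - 1)))
```

One plausible reading of the table above: because a random shuffle can leave some samples in place (or swap within-class samples), the factual FP ratio falls slightly below $\eta$, e.g. 15.67–19.34% for $\eta = 0.2$.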
Summary: The manuscript "Robust Contrastive Multi-view Clustering against Dual Noisy Correspondence" addresses the Dual Noisy Correspondence (DNC) issue in contrastive multi-view clustering (MvC), where noise affects both positive and negative data pairs. The authors propose CANDY (Contextually-spectral based correspondence refinery), a method that uses inter-view similarities to identify false negatives and a spectral-based module to denoise correspondences, thus mitigating the impact of false positives. Extensive experiments on five multi-view benchmarks demonstrate CANDY's effectiveness over existing methods. Strengths: -The paper introduces the DNC problem, a novel and practical challenge in MvC, highlighting the dual nature of noise affecting both positive and negative pairs. -CANDY combines two innovative components: the Context-based Semantic Mining (CSM) module, which identifies false negatives using high-order contextual affinities, and the Spectral-based Correspondence Denoising (SCD) module, inspired by signal processing techniques, to filter out false positives. This dual approach ensures robust clustering by comprehensively addressing both types of noise. -The proposed method consistently outperforms state-of-the-art multi-view clustering methods on various benchmarks, demonstrating its robustness and effectiveness. The extensive experiments, varying the ratio of false positives, provide thorough validation of CANDY’s capabilities. Weaknesses: -The method's applicability to other types of multi-view learning tasks, such as classification or retrieval, is not explored, limiting its broader impact within the multi-view learning domain. Including classification results would enhance the paper. 
-Visualization results, such as those in Figure 3, should be compared with other baseline methods to provide detailed insights into how CANDY's clustering performance or robustness visually differs from that of other methods, particularly in handling noisy correspondences. Technical Quality: 3 Clarity: 4 Questions for Authors: How is the pseudo target C defined in Equation 1 of the proposed method? Please provide a detailed explanation of its construction and role within the CANDY framework. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Please see weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive reviews and suggestions. We will address your questions one by one. > ***Question 1**: The method's **applicability to other types of multi-view learning tasks**, such as classification or retrieval, is not explored, limiting its broader impact within the multi-view learning domain. Including classification results would enhance the paper.* Thank you for your constructive suggestion. We follow [A] to endow our method with the capacity to handle the classification problem. To achieve multi-view classification, we randomly select 80% of the data as the training set, and use the remaining 20% as the test set. Then we simulate 50% noisy correspondence in the training set, while keeping the test set clean. After training the model using our CANDY method as the feature extractor, we train a support vector machine (SVM) on the features of the training set. Finally, we use the SVM to predict the labels on the features of the test set. The results are summarized in the following table.

| Model | Caltech-101 | NUS-WIDE |
| - | - | - |
| SURE [A] | 56.65 | 48.68 |
| DIVIDE [B] | **76.12** | 50.06 |
| CANDY (Ours) | 72.97 | **61.02** |

Our method achieves competitive results on these datasets, justifying the applicability of our method to the classification task. [A] Robust Multi-View Clustering With Incomplete Information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. [B] Decoupled Contrastive Multi-view Clustering with High-order Random Walks. AAAI Conference on Artificial Intelligence, 2024.
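The evaluation protocol above (80/20 split, classifier trained on the frozen learned features, accuracy on a clean test split) can be sketched as follows. To keep the sketch dependency-free, we substitute a nearest-centroid classifier for the SVM used in the response, and synthetic two-class features stand in for the learned representations:

```python
import numpy as np

def nearest_centroid_fit(train_x, train_y):
    """Fit one centroid per class on the learned features."""
    classes = np.unique(train_y)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(model, test_x):
    """Assign each test feature to the class of its nearest centroid."""
    classes, centroids = model
    dists = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Protocol from the response: 80/20 split, classifier trained on the
# features of the train split, accuracy measured on the test split.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0, 1, (100, 16)), rng.normal(3, 1, (100, 16))])
labels = np.array([0] * 100 + [1] * 100)
perm = rng.permutation(200)
train_idx, test_idx = perm[:160], perm[160:]
model = nearest_centroid_fit(feats[train_idx], labels[train_idx])
pred = nearest_centroid_predict(model, feats[test_idx])
acc = float((pred == labels[test_idx]).mean())
```

In the actual protocol, `feats` would be the representations produced by the trained CANDY encoder, and the classifier would be an SVM.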
> ***Question 2**: **Visualization results, such as those in Figure 3, should be compared with other baseline methods** to provide detailed insights into how CANDY's clustering performance or robustness visually differs from that of other methods, particularly in handling noisy correspondences.* As per your suggestions, we have supplemented the visualization comparisons in Figures S2 and S3 of the rebuttal PDF file, comparing our CANDY with the most competitive baseline DIVIDE [B]. As observed from Figure S2, DIVIDE struggles to differentiate between true and false positives during training, indicating overfitting on noisy correspondence. In contrast, Figure S3 demonstrates that our CANDY achieves a clear separation between true and false positives, highlighting its superior robustness in handling noisy correspondences. > ***Question 3**: **How is the pseudo target C defined in Equation 1** of the proposed method? Please provide a detailed explanation of its construction and role within the CANDY framework.* We apologize for the unclear description regarding the construction of the pseudo target $\mathbf{C}$. $\mathbf{C}$ serves as the target of the contrastive loss, which is defined as $$ \mathcal{L} = \sum_{v_1=1}^V \sum_{\substack{v_2=1 \\ v_2 \neq v_1}}^V \mathcal{H}\left(\mathbf{C}^{(v_1,v_2)}, \rho\left(\mathbf{Z}^{(v_1)}{\mathbf{Z}^{(v_2)}}^\top \right)\right), $$ where $\mathcal{H}$ denotes the row-wise cross-entropy function with mean reduction, $\rho\left(\cdot \right)$ denotes the softmax function, $\mathbf{C}^{(v_1,v_2)} \in \mathbb{R}^{n \times n}$ and $\mathbf{Z}^{(v_1)}{\mathbf{Z}^{(v_2)}}^\top$ represent the pseudo targets and affinity matrix between views $v_1$ and $v_2$, respectively. In general, traditional contrastive multi-view clustering methods assume that the cross-view correspondence is faultless, typically adopting an identity matrix $\mathbf{I}\in \mathbb{R}^{n \times n}$ as the target. 
As verified in our experiments, such a vanilla target not only misleads the model to overfit false positives but also neglects numerous semantically associated false negatives. Therefore, we propose the following formulation of $\mathbf{C}$ to achieve robustness against such a dual noisy correspondence problem: $$ \mathbf{C} = \widetilde{\mathbf{G}} + \varepsilon \mathbf{I}, $$ where $\varepsilon$ is a hyper-parameter set to 0.2 for all the experiments, and $\widetilde{\mathbf{G}}$ is the denoised pseudo target obtained through our context-based semantic mining and spectral-based correspondence denoising modules as elaborated in the manuscript. For your convenience, we attach the key steps for constructing the denoised target $\widetilde{\mathbf{G}}$ below. The pseudo target $\widetilde{\mathbf{G}}$ is obtained by $$ \widetilde{\mathbf{G}}^{(v_1 \rightarrow v_2)} = \mathbf{U} \mathbf{\widetilde{\Sigma}} \mathbf{V^{\top}}, $$ where $\widetilde{\mathbf{\Sigma}} = \mathrm{diag}(\lambda_1, \ldots, \lambda_L, 0, \ldots, 0) \in \mathbb{R}^{n \times n}$ is a diagonal matrix consisting of the retained singular values $( \lambda_1 > \cdots > \lambda_L \ge \eta)$, with a denoising hyper-parameter $\eta$ set to $0.2$ in our experiments. To obtain $\widetilde{\mathbf{\Sigma}}$, $\mathbf{U}$ and $\mathbf{V}$, we decompose $\mathbf{G}^{(v_1 \rightarrow v_2)}$ by Singular Value Decomposition (SVD) as $$ \mathbf{G}^{(v_1 \rightarrow v_2)} = \mathbf{U} \mathbf{\Sigma} \mathbf{V^{\top}}, $$ where $\mathbf{\Sigma} = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ denotes a diagonal matrix consisting of the singular values, and columns of $\mathbf{U}$ and $\mathbf{V}$ are the left- and right-singular vectors, respectively. 
The cross-view high-order affinity graph $\mathbf{G}^{(v_1 \rightarrow v_2)}$ is defined as $$ \mathbf{G}^{(v_1 \rightarrow v_2)} = \mathbf{A}^{(v_1 \rightarrow v_2)} {\mathbf{A}^{(v_2 \rightarrow v_2)}}^{\top}, $$ where the affinity graph $\mathbf{A}^{(v_1 \rightarrow v_2)}$ from $v_1$ to $v_2$ is constructed by samples $\mathbf{Z}^{(v_1)}$ and $\mathbf{Z}^{(v_2)}$ as nodes in a minibatch, with edge weights defined by Gaussian kernel similarity. Specifically, we formulate the connection probability from one node to all others as a context, which serves as an embedding of the node $i$ for semantic mining, with $$ \mathbf{A}_{i j}^{(v_1 \rightarrow v_2)}=\exp (-\|[\mathbf{Z}^{(v_1)}]_i-[\mathbf{Z}^{(v_2)}]_j\|^2 / \sigma). $$ --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns; therefore, I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you for your recognition and positive assessment of our work. We sincerely appreciate your time and effort.
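Putting together the formulas in the answer to Question 3 (Gaussian-kernel affinities, high-order graph $\mathbf{G}$, SVD truncation at $\eta$, and $\mathbf{C} = \widetilde{\mathbf{G}} + \varepsilon \mathbf{I}$), a minimal NumPy sketch of the pseudo-target construction could look like this; any normalization or minibatch handling beyond the quoted equations is omitted, and this is our reconstruction rather than the released code:

```python
import numpy as np

def build_pseudo_target(z1, z2, sigma=1.0, eta=0.2, eps=0.2):
    """Denoised contrastive target C = G~ + eps * I for a minibatch,
    following the formulas quoted in the response above.
    z1, z2: (n, d) representations of views v1 and v2."""
    # Gaussian-kernel affinity graphs A^(v1->v2) and A^(v2->v2)
    a12 = np.exp(-((z1[:, None, :] - z2[None, :, :]) ** 2).sum(-1) / sigma)
    a22 = np.exp(-((z2[:, None, :] - z2[None, :, :]) ** 2).sum(-1) / sigma)
    # high-order affinity graph G = A^(v1->v2) A^(v2->v2)^T
    g = a12 @ a22.T
    # spectral denoising: SVD, drop singular values below eta, rebuild G~
    u, s, vt = np.linalg.svd(g)
    g_denoised = (u * np.where(s >= eta, s, 0.0)) @ vt
    return g_denoised + eps * np.eye(len(z1))
```

Setting `eta=0` and `eps=0` recovers the raw high-order graph $\mathbf{G}$, which makes the effect of the two hyper-parameters (both 0.2 in the response) easy to isolate.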
Rebuttal 1: Rebuttal: Dear ACs and Reviewers, We sincerely appreciate your time and effort in reviewing our paper and providing constructive feedback. We thank the reviewers for your recognition of our novelty and contributions. * This method serves as a plug-and-play module that can be integrated into other contrastive MVC methods [6oUm, G6Ay] * The proposed method consistently outperforms state-of-the-art multi-view clustering methods on various benchmarks [jTED, L9C1, G6Ay]. * This paper is well-written and structured [6oUm]. In the following sections, we have addressed each concern and query raised by the reviewers during the rebuttal phase. We have also included a **PDF document with additional figures** to complement our responses. Thank you again for your valuable feedback and time investment. Best regards, The Authors Pdf: /pdf/12274b4ee9c46874ec04fe20dbc75f388c5eff25.pdf
NeurIPS_2024_submissions_huggingface
2024
Why Go Full? Elevating Federated Learning Through Partial Network Updates
Accept (poster)
Summary: The authors observe the 'layer mismatch' phenomenon in federated learning, and propose FedPart, which uses partial network updates to address this issue. They show that FedPart outperforms previous methods while also reducing communication and computation overhead. Strengths: The proposed idea is neat. The demonstrated performances in the experimental section are (surprisingly) good. Weaknesses: 1. The 'layer mismatch' term does not convince me. The authors try to use Figure 1(a) to show that the 'update stepsize' (to be clarified later) increases after each averaging and thus indicates layer mismatch. I don't fully understand the logic here; in particular, why does this lead to the conclusion that it is caused by 'inadequate cooperation among layers'? I do see there is a comparison in Fig1(b), but on one hand it does not differ that much (except for the 50 to 60 iterations region), and on the other hand the logic that this is due to 'layer mismatch' is still not perfectly sound even if Fig1(b) shows a greater difference. Is there other evidence, or maybe related observations in previous literature, to support this claim? 2. The proposed strategy to select trainable layers does not provide much insight. Before getting to that section, I was expecting some unique or more informative way to choose which layer to update. It seems like the proposed strategy is the first thing anyone would try, thus I would like to see some argument/support for the effectiveness of choosing this strategy. 3. The experiment setup, if I understand correctly, involves 5 rounds of full network training between each cycle, which I assume is a lot given that partial update is about 8 or 18 rounds per cycle (for Resnet-8 and Resnet-18 respectively)? This is where I am concerned about the impact of these full network trainings. Since the amount of full network training is significant in FedPart, one cannot confidently tell if it is really the partial network trainings that are truly beneficial. 4.
Regarding communication cost, I might argue that most communication cost are overhead in each communication, which I believe sharing 1/8 of the number of parameters BUT requiring more communication rounds is not necessarily saving communication cost. And my intuition is that only updating one layer at a time increases the total number of communication rounds? Technical Quality: 2 Clarity: 3 Questions for Authors: 1. What exactly is the update stepsize in Fig1? Is it the learning rate? If not, what is it? I might have missed the exact definition of it so I would appreciate if the authors can clarify it here. 2. The comparison in Fig1b really does not look like a big difference to me, for some iterations the partial update actually has a slightly bigger swings? And also I don't understand why Fig1a does not match Fig1b in terms of the blue curve, shouldn't they both be full network update and should be the same? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the reviewers' comments. Next, we will address each of the issues raised one by one. **W1 & Q1: What exactly is the update stepsize in Fig1? ... The authors try to use Figure 1(a) to show that the 'update stepsize' increases after each averaging and thus it indicates layer mismatch, ... why does this draw the conclusion that it is caused by 'inadequate cooperation among layers'?** We address W1 and Q1 together as they are closely related. In this paper, the step size refers to the magnitude of model parameter changes between iterations. Since the learning rate is fixed in this experiment, this value directly reflects the norm of the gradient. Therefore, during a normal neural network training process, the step size gradually decreases towards zero, leading the model to converge to a fixed point. However, we found that in federated learning, the step size significantly increases after each parameter averaging. This increase in step size suggests that while the global model parameters might be generally robust and contain more knowledge, they cannot cooperate well to complete local tasks immediately (i.e., there is a parameter mismatch). Additionally, because the gradient calculation follows the backpropagation algorithm, the gradient of a parameter in one layer is only directly related to the parameters in the subsequent layers, with no direct correlation among parameters within the same layer. Therefore, we believe this mismatch mainly exists between layers and we refer to this phenomenon as layer mismatch. **W2: The proposed strategy to select trainable layers does not provide much insight ...** We appreciate the reviewer's comments. In fact, although our method seems simple and intuitive, it is grounded in significant reasoning. The Sequential Training principle addresses the observation that neural networks converge shallow layers before deeper ones, a trend that is even more pronounced in federated learning [1]. 
From this perspective, in current federated learning, clients need to train deep networks without the shallow layers being fully converged, resulting in wasted computational power. Our sequential training strategy can effectively address this issue, but on its own it can lead to significant degradation in neural network performance. Therefore, we proposed another principle, Multi-round Cycle Training, which is inspired by the classic optimization method BCD (Block Coordinate Descent). By repeating cycles of training multiple times, it effectively alleviates the performance degradation caused by layer-wise training. In addition, before arriving at this final solution, we tried many different approaches, such as various parameter selection strategies (e.g., random and reverse orders) and optimizer designs, some of which are presented in the ablation studies. Furthermore, while the final method's design is relatively simple, it still involves many details, such as determining the timing for full parameter updates and balancing the length of each cycle with the total number of cycles. **W4: Regarding communication cost, ... (does) only updating one layer at a time increase the total number of communication rounds?** We will first address the reviewer's fourth weakness, as we believe the third point will be better addressed after clarifying this one. The reviewer misunderstood our experimental results and our method does NOT increase the required number of communication rounds. In all experimental tables, we ensure the number of communication rounds is identical when comparing full parameter and partial parameter training. For example, in Table 1, the row with C=1 shows the performance of each method at the round when partial network training completes the first cycle, ensuring a fair comparison across all algorithms. **W3: The experiment setup ... involves 5 rounds of full network training between each cycle, which I assume is a lot ...
Since the amount of full network trainings is significant in FedPart, one cannot confidently tell if it is really the partial network trainings that are truly beneficial.** The reviewer's understanding of the experimental setup is generally correct, but we do not agree with the skepticism. Based on the response to weakness 4 above, our method does not increase the communication rounds. Therefore, from an equivalent perspective, in the process where all rounds involve full parameter training, we modified some rounds to partial parameter training and observed significant performance improvements. Logically, this clearly indicates that partial network updates contributed to the improvement. **Q2: The comparison in Fig1b really does not look like a big difference to me ... And also I don't understand why Fig1a does not match Fig1b in terms of the blue curve** Regarding the experimental results in Fig. 1b, the reviewer felt the effect was not significant. We respect the different opinions raised by the reviewer. However, we believe that, at least it is fair to say that FedPart can significantly reduce the degree of layer mismatch in extreme cases and generally maintain the original effect in typical cases, which still supports our claim. Concerning the reviewer's comment on the difference in the full parameter training curve, we believe there is some misunderstanding. Indeed, the blue curves in Fig 1a and Fig. 1b are NOT calculated in the same way. As the step size is the total magnitude of model updates, if not all dimensions are updated, this magnitude will naturally decrease, but this is an unfair comparison. To make the comparison as fair as possible, the full parameter training curve in Fig. 1b actually shows the "update magnitude of the same parameters as those updated in partial parameter training," ensuring a fair comparison. We apologize for the confusion and will clarify it in a later version. 
[1] Unlocking the Potential of Federated Learning for Deeper Models. --- Rebuttal 2: Comment: Dear Reviewer 6JAi, Could you please respond with how you think about the authors' response? Please at least indicate that you have read their responses. Thank you, Area chair --- Rebuttal 3: Comment: I have read the authors' responses and discussions with other reviewers. I believe the authors have addressed some of my concerns (e.g. W3 W4) and questions, so I will update my score. However, I still have some concerns about the vaguely proposed term 'layer mismatch', and the authors' responses do not convince me. I asked if there are other evidences or related observations in previous literature to support the claim, but authors did not answer it. This skepticism remains also due to the lack of explanation about Fig 1(b) where I asked why the difference only seems big for the 50 to 60 iterations region but not else where. Considering that Fig 1(b) seems to be the only experimental support for the claim, not addressing questions regarding it makes the 'layer mismatch' claim unsure to me. --- Rebuttal Comment 3.1: Comment: Dear Reviewer, We greatly appreciate your positive and insightful feedback. As the first to introduce and explore the concept of "layer mismatch," we recognize that there is still room for improvement, especially in developing a deeper understanding of the underlying reasons behind this phenomenon. We are committed to further enhancing our work by strengthening the foundation of our findings with both theoretical and empirical evidence. Thank you once again for your feedback.
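The schedule defended in W3/W4 above (one layer trained and communicated per round, visited shallow-to-deep, with five full-network rounds inserted between cycles, and no extra communication rounds overall) can be sketched abstractly. The layer names and the dict-of-arrays parameter interface are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fedavg_round(global_params, client_updates, trainable):
    """Average only the layers in `trainable`; all other layers keep the
    global value, so per-round communication scales with the trainable set."""
    new_params = dict(global_params)
    for name in trainable:
        new_params[name] = np.mean([u[name] for u in client_updates], axis=0)
    return new_params

def fedpart_schedule(layer_names, cycles, full_rounds=5):
    """Yield the trainable set per round: shallow-to-deep, one layer per
    round, with `full_rounds` full-network rounds between cycles."""
    for _ in range(cycles):
        for name in layer_names:          # sequential partial rounds
            yield [name]
        for _ in range(full_rounds):      # interleaved full-network rounds
            yield list(layer_names)
```

Per-round upload cost is proportional to the trainable subset, which is where the communication savings discussed in the responses come from, while the total number of rounds is held fixed when comparing against full-network training.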
Summary: In this paper, the authors suggest a new approach for the network update step in federated learning. Considering that traditional federated averaging updates and aggregates all parameters at once, leading to a divergence between the global model and the local solution to a client’s specific task, they simply propose to only update parts of the network. They show that the layer mismatch between parameters learned locally and their corresponding global version is largely alleviated through this approach. [Edit after rebuttal: raised score to accept] Strengths: This paper is well written and the main idea, which seems well motivated and interesting, is communicated clearly. The experiments are extensive and compare across multiple methods, metrics, and ablations. Weaknesses: There are two main weaknesses, (1) the suggested layer-wise update means that there is only one (linear) weight matrix that is changed between the global and local models through learning on the local data, which seemingly makes the models much *more* vulnerable to data reconstruction compared to an update of the full model. After all, we only need to reconstruct data from a change in a linear map, rather than a change of a deep, non-linear model. While the authors show that for some specific attacks that were tailored for deep model architectures the approach does fine, a simple attack based on the change in the single layer might be much more effective. At least a discussion on this matter would be important. (2) The motivation and initial results seem convincing, yet the experimental setup is counter-intuitive: you “insert five rounds of full network training between each cycle in […] FedPart”, which means you do more *full* training than partial updates. This seems strange, the motivated layer mismatch should happen consistently in every round. Maybe this is a misunderstanding, a clarification is certainly needed. 
There are several other issues raised in the Question below, however, I do believe the paper has value and I will (significantly) raise my score if the concerns are answered. Technical Quality: 3 Clarity: 4 Questions for Authors: Major: - In Figure 1, the update step size after aggregation seems extremely sharp. I was wondering if the momentum terms (or higher order statistics) of the optimizer were properly reset once the client received the global set of parameters? I briefly checked the submitted source code and it seems the reset of Adam or other optimizers is missing, maybe I overlooked it. - In Section 3.3 could you please comment on the assumption and put into perspective how realistic they are? A brief discussion on the assumptions would be great – taking them as they are just because they are in the literature feels a bit rough. - Regarding the communication costs, how do clients know the mask $S^t$? It seems that this is missing from the analysis. - In Table 6, the ablation on warm-up rounds, it seems that the larger (60 round) warm-up performs significantly better than what was reported before (in Table 2). How does table 2 look when run with longer warm-up? What is the reason not to do that? - In the section on the warm-up ablation you mention that FedPart still enhances the model’s accuracy even after so many warm-up rounds (second to last sentence). Where can I see this in the presented numbers? - According to the paper checklist you provide repetitions to assess experimental statistical significance. This is not given for Table 6-9, which is in my opinion fine for these ablations, but the checklist argument should hence be No. You can add in the description that you assessed the significance for the main experiments with an indication which one these are. Minor: - In Figure 2, it is a bit unclear to me why the layers have these different shapes, could you please clarify? 
- The activation maximization approach is extremely old and yields arguably useless results. There is more than a decade of advancements in this field, with MACO [1], yielding great visual representations that would be much more useful in accessing the meaning. [1] https://arxiv.org/abs/2306.06805 Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: There is almost no discussion of current limitations of the method at hand. The manuscript would benefit from a slightly more critical discussion in the future. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the reviewers' comments. Here are our responses. **Q1: In Figure 1, the update step size ... seems extremely sharp ... if the momentum terms ... were properly reset?** We did reset the optimizer's state before the start of each local round. The relevant code is at line 131 in the file `fling/component/client/base_client.py`. **Q2: In Section 3.3 ... please comment on the assumption and put into perspective how realistic they are.** Thank you for the suggestion. Here are more detailed comments on the assumptions. Assumption 1 means that the gradient of the loss function does not change too abruptly as we move through the parameter space. This is reasonable for most models, where the loss changes gradually as the parameters are adjusted. Assumption 2 states that the variability of the gradient estimates is bounded. This is generally true for datasets without many extreme values, where individual data points do not drastically affect the overall gradient. Assumption 3 means that applying different masks should have a similar effect on the gradient variability. As this assumption is first proposed in this paper, we next give some supporting evidence. We conducted Monte Carlo simulations (10,000 samples) to approximate the value of $k$ in Assumption 3. As shown in the table below, $k$ is close to 1 under different settings, supporting Assumption 3. | | 0% Training (Random initialized) | 50% Training (Intermediate) | 100% Training (Fully trained) | | ------ | ------- | ------ | -------- | | ResNet-8 | 1.09 | 1.13 | 1.13 | | ResNet-18 | 1.08 | 1.18 | 1.17 | **Q3: Regarding the communication costs, how do clients know the mask $S^t$?** Actually, the server does not need to transfer the mask $S^t$ to each client. Since the trainable parameters are determined at the *layer* level, the server only needs to transfer the *indexes* of the trainable layers, which incurs almost no communication overhead. The mask notation is used only to simplify the formulation. 
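The layer-level masking in Q3 can be sketched in a few lines of plain Python; the toy layers, gradients, and learning rate are our own, not taken from the authors' codebase:

```python
def partial_update(layers, grads, trainable_idx, lr=0.5):
    """Apply a gradient step only to the layers whose indexes the server
    announced as trainable; every other layer stays frozen."""
    return [
        [w - lr * g for w, g in zip(layer, grad)] if i in trainable_idx
        else list(layer)
        for i, (layer, grad) in enumerate(zip(layers, grads))
    ]

layers = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # three toy "layers"
grads  = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]

# The server broadcasts only the trainable layer indexes, e.g. {1},
# instead of a full per-parameter mask S^t.
updated = partial_update(layers, grads, trainable_idx={1})
assert updated[0] == [1.0, 2.0]  # frozen
assert updated[1] == [2.5, 3.5]  # trained (lr = 0.5)
assert updated[2] == [5.0, 6.0]  # frozen
```

Because the mask is constant within a layer, a handful of integer indexes carries the same information as a per-parameter mask, which is why the communication overhead is negligible.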
**Q4: ... the larger (60 round) warm-up performs significantly better ... How does table 2 look when run with longer warm-up? What is the reason not to do that?** *Extending the warm-up period leads to higher final model accuracy but also increases computational and communication overhead*. For instance, to achieve extreme performance, one might train with FedAvg until full convergence before adding several rounds of partial network training. The reason we did not use such a setting in Table 2 is that we *want to show a balanced improvement in both model accuracy and system overhead*. **Q5: In the section on the warm-up ablation ... FedPart still enhances the model’s accuracy after so many warm-up rounds. Where can I see this in the presented numbers?** The relevant figures can be found in Table 6. In the table, *bef.* refers to the accuracy before partial network updates (at the end of the warm-up stage), and *aft.* refers to the accuracy after partial-parameter training. For example, the accuracy after full-parameter training at round 60 on CIFAR-10 is 58.92, while after one cycle of partial-parameter training this number becomes 66.18. **Q6: ...repetitions to assess experimental statistical significance is not given for Table 6-9 ...** We appreciate the reviewer’s suggestions. We will amend the checklist arguments accordingly. **Q7: In Figure 2, it is a bit unclear to me why the layers have these different shapes** Sorry for the confusion. In Figure 2, we use different shapes to represent different network parameters. Misaligned shapes between layers indicate a mismatch, while aligned shapes indicate a match. **Q8: The activation maximization approach is old ..., with MACO, yielding great visual representations that would be much more useful in accessing the meaning.** We appreciate the suggestion. 
Due to time constraints, we cannot provide an improved visualization immediately but will adopt modern methods in future versions. We next address the concerns raised in the "Weakness" section. **W1: ... (the proposed method) seemingly makes the models much *more* vulnerable to data reconstruction compared to an update of the full model ... a discussion on this matter would be important.** Thanks for the comment. Compared to full-parameter training, we believe our proposed method does not increase the risk of privacy leakage. We can abstract the model training process (for both full- and partial-parameter training) as a mapping $ (\Delta w_1, \Delta w_2, ..., \Delta w_n) = f(x) $, where the left-hand side denotes the updates to each model parameter and $ x $ is the training data. From a privacy-attack perspective, the goal is to find the $ x $ such that the updates to $ w $ are *as close as possible* to the actual updates in each dimension. This resembles solving a system of equations, where $ x $ contains the unknowns and each dimension of the $ w $ update contributes an equation. With partial network training, the unknowns $ x $ remain unchanged compared with full-parameter training, but the number of equations decreases (i.e., the attacker has less information to exploit). Therefore, we believe partial network training leaks less information in general. **W2: ... the experimental setup is counter-intuitive: you “insert five rounds of full network training between each cycle in FedPart”, which means you do more *full* training than partial updates ...** There may be a misunderstanding. A "round" refers to one communication between the server and clients, while a "cycle" refers to the entire process of training the partial network from shallow to deep layers, which spans many rounds. For instance, in ResNet-18 with $R/L=2$, one cycle includes 36 rounds, meaning there is far more partial-parameter training than full-parameter training. 
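The "fewer equations, same unknowns" argument in W1 can be illustrated with a toy linear system; this is entirely our own construction for intuition, not an actual reconstruction attack:

```python
import numpy as np

rng = np.random.default_rng(0)
n_unknowns = 8                              # the private data x
x_true = rng.normal(size=n_unknowns)

A_full = rng.normal(size=(12, n_unknowns))  # 12 "equations": full-model update dims
b_full = A_full @ x_true                    # observed parameter updates

# Full-parameter leakage: more equations than unknowns, x is recoverable.
x_hat, *_ = np.linalg.lstsq(A_full, b_full, rcond=None)
assert np.allclose(x_hat, x_true)

# Partial training exposes only a few of those equations: the system
# becomes underdetermined, so x is no longer uniquely pinned down.
A_partial = A_full[:4]
assert np.linalg.matrix_rank(A_partial) < n_unknowns
```

In this linearized picture, masking out layers deletes rows of the system, so the solution set for the data grows rather than shrinks, which is the intuition behind the rebuttal's claim.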
--- Rebuttal Comment 1.1: Title: Response rebuttal Comment: Thank you for your response. I appreciate all answers, they clarify many details that strengthen the paper, and I would like to see these additional discussions (esp. on the assumptions and privacy) covered in the final version of the manuscript. I raise my score to accept and wish the authors best of luck for the remaining review process. --- Rebuttal 2: Comment: Thank you for your positive feedback. We appreciate your helpful comments and we will make sure to include the additional discussions on assumptions and privacy in the final manuscript.
Summary: This paper identifies the layer-mismatch challenge in federated learning caused by full network updates. To mitigate this challenge, the authors propose the FedPart method. Specifically, FedPart has the clients perform full network updates in the initial communication rounds, and then has each client train only one layer per round, from the shallowest to the deepest. The experimental results indicate that the proposed method outperforms the full-network FL baselines. Strengths: 1. The paper proposes a novel observation of layer mismatch due to full model training in federated learning. 2. The single-layer sequential training is communication- and computation-efficient for edge devices. Weaknesses: 1. I suggest the authors provide more evidence and discussion of the proposed layer-mismatch challenge in the paper. In the current version, there is no experiment to validate the proposed challenge and how it arises during training. 2. In the experiment part, the paper mainly focuses on the iid setup. Even in the ablation study (line 273), the paper adopts alpha equal to 1 as the non-iid setup. In most FL papers, the experiments adopt an alpha as low as 0.1 to simulate extreme non-iid cases. As a result, the experiments in the paper do not provide strong support for the authors' argument. Also, the number of clients is very limited (only 40), and the experiments do not include any client sampling. 3. Even though the paper cites many related studies, the experiment part contains no other layer-wise FL training baselines. As a result, it is really hard to judge whether the proposed method achieves SOTA performance. 4. I noticed that some recent papers also proposed similar layer-wise training methods [1,2]. I suggest the authors include them as baselines to compare with. 1. Zhang, Tuo et al. 
“TimelyFL: Heterogeneity-aware Asynchronous Federated Learning with Adaptive Partial Training.” 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2023): 5064-5073. 2. Lee, Sunwoo et al. “Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training.” IEEE Transactions on Mobile Computing (2024): n. pag. Technical Quality: 2 Clarity: 2 Questions for Authors: Please see the weakness above. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please see the weakness above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the reviewers' comments. Below, we address the concerns raised in the "Weakness" section. **W1: I suggest the author provide more evidence and discussion of the proposed layer mismatch challenge in the paper. In the current version, there is no experiment to validate the proposed challenge and how this challenge happens during the training.** We conducted experiments to verify the issue of layer mismatch, as shown in Fig. 1. Here we detail the logic of these experiments. In Fig. 1a, we observed that the step size significantly increases after each model averaging, which is detrimental to the final model convergence. We believe this phenomenon is caused by layer mismatch, and thus we designed a method targeted at reducing layer mismatch (i.e., layer-wise training). The results show that after using our method, the step-size variation is significantly reduced (as shown in Fig. 1b), and the final performance is improved (as shown in Table 1). Therefore, we believe that layer mismatch does exist, affects model convergence, and consequently impacts federated learning performance, which is the core challenge our work addresses. Reviewers might have questions about the relationship between step size and layer mismatch, so we provide a more detailed explanation here: Formally, the step size resembles the *norm of the gradient* in each model update. In a normal neural network training process, the step size gradually decreases towards zero during training, leading the model to converge to a fixed point. However, in federated learning, the step size significantly increases after each parameter averaging (Fig. 1a), indicating that the global model parameters do not collaborate well to complete local tasks (i.e., there is a parameter mismatch). 
Additionally, because the gradient calculation follows the backpropagation algorithm, the gradient of a parameter in one layer is only directly related to the parameters in the subsequent layers, with no direct correlation among parameters within the same layer. Therefore, we believe this mismatch mainly exists between layers (i.e., layer mismatch). **W2: In the experiment part, the paper mainly focused on the iid setup. Even in the ablation study (line 273), the paper adopts alpha equals 1 as the non-iid setup. In most of the FL papers, the experiment would adopt an alpha equal to at least 0.1 to simulate the extreme non-iid cases. As a result, the experiment in the paper could not provide useful support to the authors' argument. Also, the client number is very limited (only 40 clients), and the experiment does not contain any client sampling in the discussion.** Thank you for the comments. We conducted additional experiments with 150 clients, randomly sampling 20% of the clients for training and aggregation in each communication round. The results are shown below, indicating that FedPart still performs better than FedAvg in this scenario. | Dataset | Cycle | FedAvg | FedPart | | :-----------: | :---: | :----: | :-------: | | CIFAR-10 | 1 | 55.18 | 58.95 | | | 2 | 60.82 | 63.22 | | | 3 | 61.50 | 63.22 | | | 4 | 64.34 | 66.08 | | | 5 | 65.00 | **67.08** | | CIFAR-100 | 1 | 30.82 | 31.96 | | | 2 | 34.82 | 37.13 | | | 3 | 39.36 | 37.13 | | | 4 | 39.64 | 41.00 | | | 5 | 40.55 | **42.12** | | Tiny-ImageNet | 1 | 17.34 | 17.83 | | | 2 | 19.63 | 23.06 | | | 3 | 19.63 | 23.06 | | | 4 | 22.01 | 26.03 | | | 5 | 23.33 | **26.75** | We also added experiments with an alpha=0.1 setting as suggested by the reviewer. The results are as follows. It can be seen that partial network training still exhibits much faster convergence compared to full network training and achieves comparable accuracy to full parameter training. 
Here, it can be seen that in extreme data heterogeneity, we were unable to achieve accuracy improvements because the main issue in this setting is client drift rather than layer mismatch. Since our method is not specifically designed to solve data heterogeneity, in future work, we may introduce other solutions for data heterogeneity to improve performance further. | Dataset | C | FedAvg (FNU) | FedAvg (FedPart) | FedProx (FNU) | FedProx (FedPart) | | :------: | :--: | :----------: | :--------------: | :-----------: | :---------------: | | CIFAR-10 | 1 | 33.79 | 44.02 | 39.64 | 43.85 | | | 2 | 44.08 | 44.41 | 46.88 | 45.42 | **W3 & W4 & W5 & W6: Even though the paper proposed many related studies, the experiment part contains no other layer-wise FL training baselines. As a result, it is really hard to judge whether the proposed method could stand for the SOTA performance. I noticed that some recent papers also proposed similar layer-wise training methods [1,2]. I suggest the author to include them as baselines to compare with.** We appreciate the reviewers' comments. However, these works cannot be used as benchmarks for comparison, as they are designed for a significantly different scenario from ours. Their purpose in using layer-wise training methods is to enable effective training of networks with different architectures in each client. However, if their methods are applied in our scenario (i.e., the same model is used on all clients, and the computational capacity is consistent), these methods will degrade to full parameter training (i.e., FedAvg, which is compared in our paper). Therefore, we believe comparing with these methods is unnecessary. --- Rebuttal Comment 1.1: Comment: I am not convinced by the author's response on W2. 
Specifically, in your newly-added results, the performance of FedAvg and FedPart is very similar on the challenging datasets (CIFAR-100 and Tiny-ImageNet), and the performance is under the iid setup (alpha = 1) based on my understanding. Also, the provided non-iid test indicates that FedPart provides almost no performance upgrade compared to FedAvg; the performance is very close when C equals 2. In addition, the authors do not provide the non-iid test on CIFAR-100 and Tiny-ImageNet. The newly-added results would imply that the proposed FedPart only works well on easy datasets with the iid setup. As a result, I would maintain my score. --- Rebuttal 2: Comment: Thank you for the reviewer's time and patience. However, we believe there are still some misunderstandings regarding this issue, so we would like to clarify further: **Q1: Specifically, in your newly-added results, the performance of FedAvg and FedPart is very similar on the challenging datasets (CIFAR-100 and Tiny-ImageNet), and the performance is under the iid setup (alpha = 1) based on my understanding.** Thank you for the reviewer’s feedback. In fact, *FedAvg and FedPart do not perform similarly on the challenging datasets*. As shown in the results, on CIFAR-10, CIFAR-100, and Tiny-ImageNet, the final accuracy achieved by our method is 67.08%, 42.12%, and 26.75%, respectively, while FedAvg achieves 65.00%, 40.55%, and 23.33%. This means our method outperforms FedAvg by 2.1%, 1.6%, and 3.4%, respectively, showing that the improvement on more challenging datasets is still substantial, even exceeding the gains on simpler datasets. Please note that in the first table provided during the rebuttal phase, "C" represents different training time points, and model performance should be compared at the final state (C=5). 
We summarize the performance improvement of our method over FedAvg in three different scenarios in the table below. It can be observed that our method generally shows greater improvements on more challenging datasets, which demonstrates the advantages of our approach. | Experiment Configurations | CIFAR-10 | CIFAR-100 | Tiny-ImageNet | | -------------------------------------------- | -------- | --------- | ------------- | | I.I.D | +1.9% | +1.9% | **+3.7%** | | Non-I.I.D. ($\alpha=1$) | +1.0% | +2.2% | **+3.4%** | | I.I.D. (client\_num=150, sampling\_rate=0.2) | +2.1% | +1.6% | **+3.4%** | **Q2: Also, the provided non-iid test indicates that the FedPart nearly does not provide a performance upgrade compared to the FedAvg, the performance is so close when C equals 2. In addition, the author does not provide the non-iid test on the CIFAR-100 and Tiny-ImageNet. Based on the newly-added results, it would provide an implication that the proposed FedPart only works well on the easy dataset with iid setup.** Thank you for the reviewer’s feedback. We have the following points to clarify: - First, the reviewer mentioned that the proposed FedPart only works well on easy datasets with the iid setup. From our understanding, the reviewer may take $\alpha = 1$ as iid or nearly iid. However, we would like to clarify that $\alpha=1$ represents a scenario with relatively significant data heterogeneity, and it is widely used in the literature [1,2,3,4]. In contrast, $\alpha = 0.1$ is an *extreme* non-iid setting, as the reviewer noted in the comment. Under the setting of $\alpha=1$, as shown in Table 4 of the main text, our method achieves final performance improvements of +1.0%, +2.2%, and +3.4% on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively. These performance gains are not negligible. - We agree with the reviewer that, in this extreme non-IID scenario ($\alpha = 0.1$), the model accuracy of our method is roughly on par with that of the full-parameter method. 
However, this does not imply that FedPart offers no advantages; the benefits primarily arise from *reduced communication and computation costs*. The results indicate that FedPart can achieve similar accuracy to FedAvg while significantly reducing communication and computation costs (these metrics are consistent with those observed in the IID scenario). As shown in Table 1, when training on Tiny-ImageNet, FedPart reduces communication overhead by 72% and computation overhead by 27%. Therefore, we believe that even in such an extreme scenario of data heterogeneity, our method still holds practical value. The above is our further response to the reviewer's comments. Thank you very much for the reviewer's patience; if there are any further questions, please feel free to ask us directly. [1] Xu, Jian, Xinyi Tong, and Shao-Lun Huang. "Personalized federated learning with feature alignment and classifier collaboration." International Conference on Learning Representations (ICLR). 2023. [2] Oh, Jaehoon, Sangmook Kim, and Se-Young Yun. "Fedbabu: Towards enhanced representation for federated image classification." International Conference on Learning Representations (ICLR). 2022. [3] Tan, Yue, et al. "Federated learning from pre-trained models: A contrastive learning approach." Advances in Neural Information Processing Systems 35 (2022): 19332-19344. [4] Zhang, Jianqing, et al. "Fedala: Adaptive local aggregation for personalized federated learning." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 9. 2023. --- Rebuttal 3: Comment: Dear Reviewer, Thank you once more for the time and effort you invested in reviewing our paper, as well as for your insightful follow-up feedback regarding the non-iid setting. In response to your comments, we have made several clarifications, which we hope address at least some of the concerns you raised. 
We acknowledge that there is still room for improvement under the extreme non-iid setting (e.g., $\alpha=0.1$). Please let us know if there are any additional questions or if further clarification is needed.
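As background for the $\alpha$ values discussed throughout this thread, clients are typically made non-iid via the standard Dirichlet label partition; a minimal sketch of that generic construction (not the paper's code) is:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients: each class's samples are divided
    according to a Dirichlet(alpha) draw, so smaller alpha yields more skewed
    (more non-iid) client label distributions."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        proportions = rng.dirichlet([alpha] * n_clients)
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, chunk in zip(clients, np.split(idx, cuts)):
            client.extend(chunk.tolist())
    return clients

labels = np.repeat(np.arange(10), 100)               # 10 classes x 100 samples
clients = dirichlet_partition(labels, n_clients=5, alpha=0.1)
assert sum(len(c) for c in clients) == len(labels)   # every sample assigned once
```

With $\alpha=1$ the class proportions per client are moderately skewed, while $\alpha=0.1$ concentrates most of a class's samples on a few clients, matching the "extreme non-iid" usage in the discussion above.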
Summary: The paper proposes a new and novel method to partially train networks to achieve better training efficiency but also, in some cases, better performance. Strengths: Training efficiency is an extremely important and timely topic. Given that FL aims to train massive networks, any efficiency gains are amplified by the network size. This work, in my opinion, has the following strengths: - The idea is simple, however it is executed well and the intuition behind it is sound. - The paper can be fully reproduced as the code and datasets are provided. - The experiment section is thorough and has sufficient experiments to validate the authors' claims. Weaknesses: As the authors acknowledge, while the experiments are sufficient, they are rather limited in terms of dataset sizes given the target applications. Further, synthetic data experiments are missing; I would have personally appreciated seeing how FedPart performs in the setting of both IID and non-IID data. Technical Quality: 3 Clarity: 3 Questions for Authors: Based on my comments above, I would like to ask the following questions: - How would the method perform on IID and non-IID data? - Would it be possible to add some experiments regarding the previously mentioned question? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have sufficiently addressed limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the reviewers' comments. Here are our responses. **Q1: How would the method perform on IID and non-IID data?** Thank you for the reviewer's comment. We have conducted additional experiments to enrich our analysis of non-IID data scenarios. We added experiments with an alpha=0.1 setting, where data heterogeneity is more severe. The results are as follows. It can be seen that partial network training still exhibits much faster convergence than full network training while maintaining the final accuracy. This further shows the robustness of our proposed FedPart under data heterogeneity. | Dataset | C | FedAvg (FNU) | FedAvg (FedPart) | FedProx (FNU) | FedProx (FedPart) | | :------: | :--: | :----------: | :--------------: | :-----------: | :---------------: | | CIFAR-10 | 1 | 33.79 | 44.02 | 39.64 | 43.85 | | | 2 | 44.08 | 44.41 | 46.88 | 45.42 | **W1: ... limited in terms of dataset sizes ...** We acknowledge your concern regarding the dataset sizes used in our experiments. As we mentioned in the paper, we did not use particularly large datasets to test our method. However, the datasets we used are comparable in size to those used in related works in the field [1, 2], and we believe they are sufficient to demonstrate the efficacy of FedPart. We will aim to incorporate more extensive datasets in future iterations of our research. **W2: ... synthetic data experiments are missing ...** We appreciate your suggestion to include synthetic data experiments. This is indeed a worthwhile addition that can provide more insights into the performance of FedPart in various settings. However, due to time constraints, we cannot immediately incorporate synthetic data experiments. We will seriously consider adding synthetic data experiments in both IID and non-IID settings in future versions of our work. [1] McMahan, Brendan, et al. "Communication-efficient learning of deep networks from decentralized data." 
Artificial intelligence and statistics. PMLR, 2017. [2] Li, Qinbin, Bingsheng He, and Dawn Song. "Model-contrastive federated learning." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021. --- Rebuttal Comment 1.1: Title: Read your rebuttal Comment: Thank you for the clarifications provided. Taking in account the rest of the reviews/responses thus far, I will be keeping my score as is. I wish the authors the best of luck with the final decision process. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful review and for taking the time to consider our clarifications. We greatly appreciate your positive feedback and helpful comments.
NeurIPS_2024_submissions_huggingface
2024
HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning
Accept (poster)
Summary: This paper identifies that KD-based methods used to tackle data heterogeneity become more vulnerable under model poisoning attacks. Moreover, these methods unknowingly align benign client models with a poisoned server model in a malicious setting, a phenomenon called attack amplification. To address this, HYDRA-FL is proposed, which applies the KD loss at a shallow layer via an auxiliary classifier. Strengths: Clear motivation, with a case study presenting the observation that the “model alignment process inadvertently forces local models to align its representation/predictions to the poisoned global model amplifying the attack’s impact”. It is easy to follow, and the HYDRA method is simple to implement, which is very important. Weaknesses: - Implementing HYDRA on only two methods cannot convince me that this method is sufficiently effective. As these two methods are also outdated, it would be better to apply this simple modification to more KD methods. - A deeper analysis of the experimental observations is needed to provide more insights and inspiration. Technical Quality: 3 Clarity: 3 Questions for Authors: - In the FL settings, a sampling ratio equal to 1 across 10 clients in MOON is not a realistic scenario. - Why test only $\alpha=0.5, 0.1, 0.05$ on the CIFAR-10 dataset? What is the performance on the other two datasets? - How do you explain that the performance gain under the adversarial setting is slightly lower for larger $\alpha$ than for smaller $\alpha$? E.g., in Table 1, CIFAR-10 $\alpha=0.05$: FedNTD→HYDRA-FL 21.72 to 25.15; $\alpha=0.5$: FedNTD→HYDRA-FL 52.51 to 52.57. This phenomenon can also be observed in Table 2 and Figure 5. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See weakness and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1- Two methods:** We appreciate the reviewer's concern regarding our evaluation of HYDRAFL on only two existing methods and agree that this is a limitation. We chose these two methods as initial test cases to demonstrate the *attack amplification* phenomenon we discovered, to motivate the need for HYDRAFL, and then to show the adaptability of HYDRAFL on two KD-based techniques. Due to the large number of existing techniques, it was not feasible to include an exhaustive evaluation within the scope of one paper. However, we conducted a wide range of experiments on these two chosen techniques to show the effectiveness of our technique and the impact of the choice of different parameters. **W2 - Depth Analysis of Experiments:** We agree that an in-depth analysis is needed. HYDRAFL can be tested with various experimental setups, including various aggregation rules, attacks, defenses, datasets, data modalities, data distribution types, data heterogeneity levels, number of clients, client sampling ratios, number of malicious clients, etc. While evaluating all possible combinations would be ideal, we, unfortunately, cannot fit everything in one paper while considering our limited computational resources. We prioritized certain setups based on feasibility and resource availability. Here, we highlight a few points regarding our experimentation: - Although MOON and FedNTD are KD-based FL techniques, they use different approaches. FedNTD penalizes prediction divergence measured through distillation loss, while MOON penalizes representation divergence measured through contrastive loss (Section 3). This shows that attack amplification is not limited to one single technique. We chose two model poisoning attacks (Appendix C.2): Dyn-Opt for FedNTD and Stat-Opt for MOON. We chose two different attacks to show that attack amplification is not unique to a single attack but is rooted in the fundamental nature of KD. 
- The sampling ratios for MOON and FedNTD are 1 and 0.1, respectively. HYDRAFL performs well in both cases; therefore, it is safe to assume that it does not rely on the sampling ratio. - Since we cannot cover the entire space of KD algorithms, we propose our solution (Page 6, Equation 4) as a generic formulation that can be inserted over the standard KD equation (Page 3, Equation 3). We have provided the well-documented code, which requires 1) modifications to the model architecture and the loss function equation and 2) applying an attack and a defense in the training code in the original paper's repository. This demonstrates the ease of implementation for any future work. - We perform ablations across different heterogeneity values, choice of shallow layer, and distillation coefficients. We do this for both FedNTD (Page 9, Figure 7) and for MOON (Appendix E.2, and we ran some new experiments in response to your Q2). Further explanations for some of our experimental settings are addressed in response to the questions below. **Q1- Sampling ratio:** We thank the reviewer for raising this point about realistic scenarios. We agree that a sampling ratio of 1 with 10 clients is not realistic for MOON. Our goal was to show attack amplification in settings where MOON/FedNTD performs well, using the original paper’s settings for fair comparison. The FedNTD paper uses a sampling ratio of 0.1 (100 clients), while MOON selects all 10 clients. We followed the original paper's hyperparameters to effectively demonstrate the problem and solution. We will update our Limitations and Future Works Section to include large-scale analysis, especially with the language modality, and move the Limitations section to the main paper. **Q2- Heterogeneity settings in other datasets:** HYDRAFL is built upon the MOON and FedNTD, using the same hyperparameters unless otherwise specified. 
These specified alpha values are within the tested range of the original works, ensuring the integrity and comparability of our results. To demonstrate robustness and generalizability, we have now conducted additional experiments on MNIST at a much lower $\alpha$ of 0.05 (i.e., much higher heterogeneity). Due to time and computational constraints, we could not add more dataset and heterogeneity combinations for this rebuttal.

| MNIST (heterogeneity=0.05) | no attack | attack |
|-----------------------------|-----------|--------|
| FedAvg | 69.2 | 59.36 |
| MOON | 73.57 | 55.32 |
| HYDRAFL shallow layer1 | 76 | 57.4 |
| HYDRAFL shallow layer2 | 75.15 | 57.71 |

The above table shows attack amplification and the effectiveness of our solution. MOON's accuracy drops from 73.57\% (no-attack) to 55.32\% (attack), showing attack amplification compared to FedAvg. HYDRAFL achieves higher accuracy than both FedAvg and MOON in the no-attack setting and higher accuracy than MOON in the attack setting, reducing attack amplification. Additional experiments for $\mu = 1, 0.3, \textrm{and } 0.1$ are shown in Appendix E.2. **Q3- Performance gain lower under low heterogeneity:** We thank the reviewer for pointing this out. We apologize that we should have explained the 'performance gain' vs. heterogeneity relation in the paper, and we plan to include it in the final version. A low $\alpha$ corresponds to high heterogeneity. It is like an out-of-tune car engine where we can make significant improvements. The same is true with model updates in high heterogeneity, where they drift apart a lot, so there is more room for improvement. Conversely, a low-heterogeneity setting is like an already-tuned engine with little room for performance improvement. For example, for MOON (Table 2), at $\alpha=0.1$, MOON $\rightarrow$ HYDRAFL accuracy goes from $39.9 \rightarrow 43.6$, while at $\alpha=5$, accuracy goes from $67 \rightarrow 68.4$. This explanation will be included in detail in the final version.
--- Rebuttal Comment 1.1: Comment: Thanks for the efforts during this period. My concerns have been addressed. I have updated my score.
Summary: This paper investigated the phenomenon termed *attack amplification* in federated learning with Knowledge Distillation (KD) and proposed an FL framework named HYDRA-FL to reduce the impact of poisoning client attacks in FL. An auxiliary classifier is introduced to employ the KD loss at a shallower layer and reduce the effect of KD on the final layers of the model. Experiments on benchmark datasets demonstrated the performance gain of the proposed method under both clean and poisoning settings compared with a few other FL methods. Strengths: \+ Showcased a crucial threat of KD, a commonly used technique in federated learning. \+ Provided a unified solution for robust KD that is agnostic to most FL frameworks. \+ Nice ablation study to validate the effects of different components in the proposed algorithm. Weaknesses: \- Practicality: It seems that this algorithm hinges on careful choices of the hyper-parameters $\beta$, $\mu$, $\gamma$. Sensitivity analysis against drastic changes in these hyper-parameters should be provided; Figure 7 alone is not sufficient. \- Experiments: only two FL+KD methods were analyzed, whereas there is much other related work that employs KD in FL, such as methods that leverage proxy datasets [1]. \- Related work needs to be more comprehensive. For instance, there has been work that discussed KD in FL with poisoned teacher models [2]. Experimental comparison, or at least a discussion of work along this line, is expected. --- References: [1] Lin, Tao, et al. "Ensemble distillation for robust model fusion in federated learning." *Advances in neural information processing systems* 33 (2020): 2351-2363. [2] Hong, Junyuan, et al. "Revisiting data-free knowledge distillation with poisoned teachers." *International Conference on Machine Learning*. PMLR, 2023. Technical Quality: 3 Clarity: 2 Questions for Authors: * The proposed method seems to focus on model poisoning attacks. 
I wondered how this algorithm would perform when the local client updates are from models with poisoned data (such as flipped labels). * Authors mentioned that 'completely removing the KD-loss at the output layer may cause a more negative impact than keeping it in a reduced form'. What could be the reasons/implications behind this phenomenon? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1- Sensitivity analysis:** We agree with the reviewer that sensitivity analysis is crucial. Therefore, in addition to Figure 7 (FedNTD), we explain Table 4 (MOON) in Appendix E.2 since it is an ablation over the different shallow layers, diminishing factors, and attack/no-attack settings. We clarify that in Appendix E.2 we meant to say ``lower effective value of $\mu$'', where effective $\mu$ is $\frac{\mu}{b}$ and 'b' is the diminishing factor. We chose hyperparameters over validation sets. The table below shows the impact of decreasing *effective $\mu$* (or increasing the diminishing factor 'b'). We see that in the no-attack setting, a high effective $\mu$ (low 'b') results in higher accuracy, as a higher diminishing factor reduces the distillation impact (Section 4.1). Conversely, in the attack setting, we see the opposite trend. This is because the diminishing factor reduces the impact of distillation on the final layer, thereby reducing attack amplification, as can be seen by the accuracy drop.

| **Method** | **$\mu$** | **no-attack** | **attack** | **accuracy drop** |
|--------------|-----------|---------------|------------|-------------------|
| HYDRA-FL s1 | 1 | 94.41 | 68.68 | 25.73 |
| HYDRA-FL s2 | 1 | 91.78 | 68.13 | 23.65 |
| HYDRA-FL s1 | 0.3 | 92.03 | 72.35 | 19.68 |
| HYDRA-FL s2 | 0.3 | 92.92 | 73.55 | 19.37 |
| HYDRA-FL s1 | 0.1 | 92.04 | 76.65 | 15.39 |
| HYDRA-FL s2 | 0.1 | 93.93 | 72.54 | 21.39 |

**W2- Other methods:** We thank the reviewer for mentioning other works, and we will expand on related works in the final version of the paper. [Tao et al., 2020] was primarily designed to address model heterogeneity in addition to data heterogeneity. Hence, they came up with model fusion on the server side using knowledge distillation with auxiliary data. 
Briefly, this fusion step involves initializing a student server model by aggregating the client models of different heterogeneous groups and then using an auxiliary dataset with each group to distill information into the server model. Our focus is on **uncovering issues arising from data heterogeneity**, which makes it difficult for fusion models to determine whether attack amplification is caused by data or model heterogeneity. However, exploring how model heterogeneity impacts attack success is an intriguing direction for future research. **W3- Related Work:** We agree that a discussion comparing our work with [Junyuan et al. 2023] is necessary, and we will include it in the appendix in the final version of our paper. *Similarities:* [Junyuan et al. 2023] is also a mitigation technique against poisoned teachers. They also try to find out if poison is passed from the teacher to the student through the distillation process. To prevent this passing of poison, they have designed an anti-backdoor technique. Their technique also works by suppressing the poison and mitigating the performance degradation caused by *bad supervision*. They name these two stages of their defense mechanism as Shuffling Vaccine (SV) and Self-Retrospection (SR). *Dissimilarities:* However, there are two key differences. 1) Their technique targets backdoor attacks where a trigger causes performance degradation for a specific subset of samples. They use a "shuffling vaccine" to detect suspicious samples by shuffling channels. In contrast, we reduce the impact of poison by scaling down the distillation loss at the final layer using a *diminishing factor*. We address model poisoning, which is much stronger than backdoors since it aims to indiscriminately lower the accuracy of all samples. Therefore, our intuition is that their technique might not work against untargeted model poisoning as it is explicitly designed for backdoors. 
2) To mitigate performance degradation from a backdoored teacher, they use self-retrospection by synthesizing potential backdoor knowledge learned and confronting it during training. Conversely, we mitigate this by directing the distillation loss through an auxiliary classifier and scaling down the loss at the final classification layer. **Q1- Data poisoning:** While data poisoning in the context of knowledge distillation (KD) requires a dedicated study, we believe our solution is broadly applicable as it prevents the flow of *poison* through the entire model and *dilutes* its effect. Consider a simple example: label flipping, a basic data poisoning attack where some labels in the training dataset are flipped (e.g., changing a bird image's label to 'airplane' in CIFAR-10). Suppose the server model (teacher) is poisoned with this flipped dataset. During the distillation process, since we are using HYDRAFL, this poisoned distillation information would be diluted at the final layer. Instead, it will flow through an auxiliary classifier into the early layers of the model, thereby reducing the impact of the data poisoning attack via label flipping. **Q2 Removing KD loss:** Thank you for allowing us to provide clarification on this. Consider a distillation setup, where the teacher model is providing guidance to the student model. If the guidance (KD-loss) is removed from the final decision-making layer, the student lacks feedback on how its final predictions compare to the expert's. This is similar to a student receiving partial tutoring: while they may benefit initially, the absence of final guidance prevents them from fully applying the partial knowledge they have accumulated, hindering their ability to achieve the best results. By keeping the KD-loss in a reduced form, the student continues to receive crucial guidance throughout the network, ensuring better alignment and improved accuracy. 
The diminishing factor 'b' adjusts how much of this guidance is applied, allowing fine-tuning based on specific model needs.
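To make the loss template discussed in this rebuttal concrete (reduced KD at the final layer plus KD at a shallow auxiliary classifier), here is a minimal, framework-free Python sketch. It is an illustration only: the function and variable names are ours, and the KD term is written as a KL divergence for concreteness, whereas the actual alignment term differs per algorithm (contrastive loss for MOON, not-true distillation for FedNTD).

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(student_logits, teacher_logits):
    """KL(teacher || student) between softmax outputs; always >= 0."""
    p = softmax(student_logits)
    q = softmax(teacher_logits)
    return sum(qi * (math.log(qi) - math.log(pi)) for qi, pi in zip(q, p))

def cross_entropy(logits, label):
    """Cross-entropy of the softmax prediction against a hard label."""
    return -math.log(softmax(logits)[label])

def hydra_fl_loss(y_c, y_aux, y_s, label, beta=1.0, b=4.0, gamma=2.0):
    """HYDRA-FL-style client loss (our sketch of the template):
    CE at the final classifier, KD at the final classifier scaled
    down by the diminishing factor b, and KD at a shallow auxiliary
    classifier weighted by gamma. y_c / y_aux are the client's final
    and auxiliary logits; y_s are the server (teacher) logits."""
    return (cross_entropy(y_c, label)
            + (beta / b) * kl_div(y_c, y_s)
            + gamma * kl_div(y_aux, y_s))
```

With the illustrative default `b=4`, the final-layer KD contribution is a quarter of what vanilla KD would apply, which is the "reduced form" referred to in the answer to Q2, while the auxiliary term keeps guidance flowing into the early layers.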
Summary: This paper first empirically demonstrates the fact that KD algorithms amplify attack effectiveness. Then the authors propose HYDRA-FL as a method for mitigating the attack amplification problem, through a novel loss function template that can be applied to any FL algorithm wherein the local training objective can be adapted. Strengths: Very well written. The paper flows nicely and is well organized, and the explanations provided are very clear and thorough. The background concepts are explained well so I think this paper is accessible to a large audience (not necessarily limited to people who have background knowledge in FL). The method is reproducible because the paper explains how exactly they modified publicly-available codes for their implementation. Very extensive empirical testing, including a study that motivates the work and a few ablation studies. The results are presented in very clear figures in addition to being well-explained in the text (including numerical values, qualitative statements, and conclusory statements that summarize the takeaways from each set of results). The proposed algorithm is definitely novel. It is taking (to the best of my knowledge) a completely new approach that therefore has high potential to inspire future work. The paper also does a good job of introducing and justifying the attack amplification problem in a way that could inspire future work. Weaknesses: This does not seem like a very practical solution for real world settings since it causes a drop in benign accuracy on such simple datasets. I would think that the accuracy sacrifice could potentially be even more significant with larger, more complex models/task. It is not made clear why the accuracy drop is worth the avoidance of attack amplification. It would be helpful if there was some examples/explanation for how this approach is useful in real world settings. 
For instance, how can attack amplification be problematic in a real-world setting, and how would HYDRA-FL solve the issue? It is nice that there is some theoretical justification for the attack amplification. The paper could be even stronger if (perhaps in the appendix) you could somehow use this justification to relate to some sort of theoretical justification for the effectiveness of HYDRA-FL. Otherwise, with just the empirical results from a limited array of experimental setups, the claims of HYDRA-FL’s effectiveness are not very sound. It seems like your paper is missing some valuable information. For example, I spent some time trying to understand the threat model, but then I found it in the appendix. So it would be helpful if the main text of your paper referenced the appendix more, so that the reader knows what they can go to the appendix to find. The limitations and future work are also easy to miss when just put in the appendix, so I would suggest referencing that appendix in your conclusion in the main text. The related work section isn’t really explaining how your work relates to prior work. It would be nice to have more explanation about prior solutions, what their shortcomings are, and how your proposed solution overcomes those shortcomings. This is explained more in the introduction and also later in the paper (e.g. in sec. 4.1), which is helpful, but when reading the related work section it is hard to understand where your work fits in. So adding at least a sentence for 2.1.1 or 2.1.2, for example, could be helpful. Technical Quality: 3 Clarity: 4 Questions for Authors: Do you have any hypotheses about how HYDRA-FL would work in general across any type of KD algorithm? It would be nice if you had some justification for why your approach could work more generally and not just with FedNTD and MOON. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: Limitations are addressed in the appendix. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1- Real-world settings:** *Real-world use cases and task complexity:* We agree that real-world dataset analysis would be valuable. While our current choice of datasets allows for direct comparison with previous works, we plan to evaluate more complex models and tasks, such as language and multimodal models, in future studies, as discussed in Appendix A. Despite limitations, our results on CIFAR-100, which is more complex (as evidenced by its low no-attack accuracy) than MNIST and CIFAR-10, show HYDRAFL's robustness. HYDRA-FL outperforms FedAvg in both no-attack and attack scenarios and surpasses FedNTD and MOON in attack scenarios. Attack amplification can severely impact applications with diverse data in real-world scenarios such as healthcare or large-scale IoT networks. HYDRA-FL mitigates these issues by distributing the distillation process across layers (Section 4.1), which prevents attacks from focusing on a single layer. *Justification on attack amplification avoidance:* We acknowledge the need to explain how our approach avoids attack amplification and will address this in the final version. Tables 1\&2 (page 8) show a slight decrease in benign accuracy for HYDRA-FL compared to FedNTD/MOON in some cases. However, HYDRA-FL generally achieves higher benign accuracy. More importantly, HYDRA-FL demonstrates significantly higher accuracy in the presence of attacks than FedNTD/MOON, resulting in a substantial overall performance improvement. We compute accuracy drops (no-attack minus attack accuracies) for FedNTD/MOON vs. HYDRAFL. The following tables consistently show that HYDRA-FL experiences a smaller accuracy drop, highlighting its effectiveness. 
| | FedNTD | HYDRAFL |
|-------------|--------|---------|
| MNIST | 34.94 | 16.02 |
| CIFAR10 0.05| 25.22 | 21.77 |
| CIFAR10 0.1 | 24.34 | 22.87 |
| CIFAR10 0.5 | 19.28 | 18.65 |
| CIFAR100 | 15.18 | 14.57 |

| | MOON | HYDRAFL |
|------------|-------|---------|
| MNIST | 18.81 | 15.39 |
| CIFAR10 0.1| 18.9 | 16.5 |
| CIFAR10 0.5| 6.17 | 3.39 |
| CIFAR10 5 | 3.95 | 2.15 |
| CIFAR100 | 5.53 | 4.3 |

Thus, avoiding attack amplification is worth the *slight* drop in benign accuracy. In the real world, enhanced security against attacks justifies this tradeoff, particularly in critical applications such as autonomous driving, healthcare, or financial fraud detection. **W2- Theoretical justification:** The reviewer has raised an important point about the theoretical justification for HYDRA-FL. The generic loss function of an FL client using KD is: $\mathcal{L} = \mathcal{L}\_{CE}(y\_{c}, y) + \beta \mathcal{L}\_{KD}(y\_{c}, y\_{s})$ In Section 3, we showed that when malicious clients are present, $\frac{d\mathcal{L}}{d\beta} > 0$, i.e., KD causes the loss to increase. In Section 4, we formulated the generic loss function of an FL client using KD in HYDRA-FL as: $\mathcal{L'} = \mathcal{L}\_{CE}(y\_{c}, y) + \frac{\beta}{b} \mathcal{L}\_{KD}(y\_{c}, y\_{s})+ \gamma \mathcal{L}\_{KD}(y\_{aux}, y\_{s})$ We use $\mathcal{L'}$ for the HYDRA-FL loss and $\mathcal{L}$ for the basic KD loss. Comparing them, since $\frac{\beta}{b} < \beta$, it follows that $\frac{\beta}{b} \mathcal{L}\_{KD}(y\_{c}, y\_{s}) < \beta \mathcal{L}\_{KD}(y\_{c}, y\_{s})$. The entire model is $\theta$, and the part up to the auxiliary layer is $\theta'$. The impact of the shallow distillation loss is only on $\theta'$ and not on the rest of the model $\theta - \theta'$. Therefore, with appropriate $b$ and $\gamma$, $\frac{d\mathcal{L'}}{d\beta} < \frac{d\mathcal{L}}{d\beta}$. 
This is shown in Figure 7, where we select $b=[1,4]$ (so $\beta$ effectively becomes $1$ and $\frac{1}{4}$, respectively) and $\gamma=2$ to show the effectiveness of our technique and the impact of the variation of these hyperparameters. **W3- Threat model, limitations, and future works referencing:** Thank you for pointing this out. We will update our paper and refer to the threat model (Appendix C.1) and attacks+defenses used (Appendix C.2) in the Introduction (Section 1) and Experimental Settings (Section 5.1). **W4- Where does HYDRA-FL fit in related work?** We recognize the need to clarify how our work fits within this field. We will update Section 2.1.1. Unlike MOON and FedNTD, which address data heterogeneity only in no-attack settings, HYDRAFL operates in both no-attack and attack settings, achieving robustness and accuracy under high heterogeneity. Section 2.1.2 will explain that our paper shows how KD-based data-heterogeneity mitigation techniques amplify poisoning attacks and that HYDRAFL addresses both data heterogeneity and poisoning. **Q1- How would HYDRA-FL work with any other KD algorithm?** The reviewer has raised an important point about HYDRAFL's generalizability. We will incorporate this explanation of HYDRAFL's application to any KD FL algorithm in the appendix of our final version. KD-based FL techniques align local and server models to reduce deviation caused by data heterogeneity. As stated in Section 3, MOON uses model-contrastive learning for alignment, while FedNTD uses KL-divergence. Any KD-based FL algorithm achieves this alignment, akin to fine-tuning an engine, with the following equation: $ \mathcal{L} = \mathcal{L}\_{CE}(y\_{c}, y) + \phi \mathcal{L}\_{alignment}(y\_{c}, y\_{s})$ Here, $\mathcal{L}\_{alignment}$ represents the alignment technique. $\phi$ represents the strength of this alignment. In this generic setting, HYDRAFL modifies the above equation by diminishing $\phi$ with '$b$'. 
We add a new loss term that performs this alignment at a shallow layer with an auxiliary classifier. HYDRAFL's equation becomes: $\mathcal{L} = \mathcal{L}\_{CE}(y\_{c}, y) + \frac{\phi}{b} \mathcal{L}\_{alignment}(y\_{c}, y\_{s})+ \eta \mathcal{L}\_{alignment}(y\_{aux}, y\_{s})$ This enables HYDRA-FL to adjust the alignment process for any KD algorithm. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response. I have updated my rating of this paper due to my comments/concerns being well addressed.
Summary: This work addresses the challenge of data heterogeneity in Federated Learning (FL) and its impact on global model performance. The authors demonstrate why KD is susceptible to the issue of poisoning attacks and use these findings as a foundation to propose a novel method HYDRA-FL. Experimental results demonstrate that HYDRA-FL not only enhances resistance to attacks but also maintains its performance in benign settings. Strengths: - The authors have studied why Knowledge Distillation (KD) amplifies model poisoning and propose their method based on these findings. - The authors implemented ablation experiments to prove the effectiveness of the proposed method. - The authors will release the codes. Weaknesses: - The performance advantage is sometimes unstable, such as when alpha=5 in Table 2. The authors could provide a more detailed analysis to explain this variability. - The authors should explain how their findings regarding the amplification of attacks through knowledge distillation are applicable to both standard knowledge distillation learning and KD-based FL, emphasizing the differences and similarities in these scenarios. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses mentioned above. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations of the work in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Unstable performance advantage:** We thank the reviewer for giving us the opportunity to provide further explanation of this variability. We begin by noting that we see stronger accuracy gains in high-heterogeneity settings (low alpha), as we show in Tables 1 and 2 (Page 8). The reason behind this relationship between heterogeneity and accuracy gain is that, at high heterogeneity, there is more ``room'' for gain. We explain it with a simple analogy. Think of a high-heterogeneity setting as a car engine that is out of tune. Therefore, there is an opportunity to make significant improvements by tuning it. Conversely, a low-heterogeneity setting is like an already-tuned or slightly out-of-tune engine where corrective adjustments lead to smaller gains. This analogy highlights the idea that there is more disparity or "distance" from the optimal global model in high-heterogeneity settings, allowing methods like FedNTD or MOON to achieve more noticeable improvements. In contrast, the potential for noticeable improvements is reduced when the system is already closely aligned with the global model (low heterogeneity). Applying FedNTD or MOON brings them closer to the global model, and if we think of these updates as vectors (as shown in our Figure 1), then the ``angle'' of improvement will be high. On the other hand, in a low-heterogeneity setting, where model updates are already close to the global model, there is little improvement in the final accuracy when we apply FedNTD or MOON, i.e., the angle of improvement is low. We can also see this numerically in Tables 1\&2 in Sections 5.1\&5.2. To clarify our point, we calculate this accuracy gain for each heterogeneity level for CIFAR10 and show it in the following tables. These tables show that the accuracy gain is stronger at high heterogeneity (low $\alpha$). 
*HYDRAFL in the no-attack setting (Table 1, Page 8)*

| Heterogeneity | FedAvg | HYDRA-FL | Gain |
|---------------|--------|----------|------|
| 0.05 | 44.69 | 46.92 | 2.23 |
| 0.1 | 54.67 | 57.12 | 2.45 |
| 0.3 | 66.34 | 68.1 | 1.76 |
| 0.5 | 70.57 | 71.22 | 0.65 |

*HYDRAFL in the attack setting (Table 1, Page 8)*

| Heterogeneity | FedNTD | HYDRA-FL | Gain |
|---------------|--------|----------|------|
| 0.05 | 21.72 | 25.15 | 3.43 |
| 0.1 | 32.61 | 34.25 | 1.64 |
| 0.3 | 46.72 | 47.03 | 0.31 |
| 0.5 | 52.51 | 52.57 | 0.06 |

*HYDRAFL in the no-attack setting (Table 2, Page 8)*

| Heterogeneity | FedAvg | HYDRA-FL | Gain |
|---------------|--------|----------|------|
| 0.1 | 57.76 | 60.1 | 2.34 |
| 0.5 | 63.14 | 63.32 | 0.18 |
| 5 | 71.19 | 70.55 | -0.64 |

*HYDRAFL in the attack setting (Table 2, Page 8)*

| Heterogeneity | MOON | HYDRA-FL | Gain |
|---------------|------|----------|------|
| 0.1 | 39.9 | 43.6 | 3.7 |
| 0.5 | 57.17 | 59.93 | 2.76 |
| 5 | 67 | 68.4 | 1.4 |

In addition, we want to highlight that in the attack setting, HYDRA-FL can outperform both FedAvg and MOON at $\alpha = 5$. If we look closely, the accuracy drop (i.e., the difference between no-attack and attack accuracies) is 2.81 for FedAvg, 3.95 for MOON, and 2.15 for HYDRA-FL. Thus, we achieve the lowest accuracy drop in this setting. **W2: Comparison with standard KD:** The reviewer has raised a very important point about the similarities and dissimilarities between standard KD and FL KD regarding attack amplification. We provide some insights into this matter and plan to incorporate this comparison in the final version of our paper. We will add this in the appendix with a dedicated section titled ``Comparison with Standard KD''. **Dissimilarities:** We elaborate on the dissimilarities first since they relate to the practicality of the attack amplification phenomenon. The threat model differs when we compare standard KD to KD in FL. In standard KD, there is only one student and one teacher. 
An attacker might poison the teacher with a backdoor trigger that causes misclassification on specific input samples. In the FL case, the teacher (server) model itself is not directly attacked. Instead, some of the client models are attacked, which indirectly poisons the global model because it is obtained by aggregating all the client models. Therefore, in the FL case, the attacker is poisoning some of the student models, which in turn poison the server model upon aggregation, and then propagates the poison to the non-malicious/benign clients when they use KD in their local training step. **Similarities:** We believe that our mitigation technique, originally designed for FL, can also be effectively applied in non-FL settings. The premise of our technique, HYDRA-FL, is to reduce the distillation at the final layer and relegate some of it to a lower layer, thereby reducing the impact of distillation. To illustrate, imagine the poisoned distillation loss at the final layer as noise being injected into the model. By adjusting the distillation process (Section 4.1), this noise is relegated to lower layers through an auxiliary classifier, thereby mitigating the impact of such attacks. This *dilution* of adversarial effects can enhance robustness in both FL and non-FL settings. This is an interesting comment by the reviewer. It can be a potential future direction to rigorously experiment to find similarities and dissimilarities between KD in non-FL and FL settings.
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you for your detailed, insightful reviews and helpful comments, with scores ranging from 5 to 7. Our efforts to address your questions and comments have significantly improved the paper. We are particularly grateful for the explicit questions on how our work will benefit the FL community compared to the others and the feasibility of our solution in real-world scenarios. We also realize there are concerns regarding the extent of our evaluation. While we acknowledge these concerns, we have added clarifying results to our rebuttal. Conducting an exhaustive evaluation was beyond the scope of a single paper, but we have addressed these weaknesses and limitations and plan to incorporate these insights into the updated version of our paper. We have provided detailed answers to all the questions, including as much detail as possible within the text limits and comments of each reviewer in their respective rebuttal sections. Below is a summary of the major questions about our paper and our responses: - **Similarities and dissimilarities with other works:** The similarities and dissimilarities with related works [1,2] (pointed out by Reviewer SYjG) were not discussed in the main paper. We address this issue by presenting a comparison of our work with related works [1,2]. We also discuss how our technique compares to the non-FL setting or with other forms of attacks, such as data poisoning attacks, and provide some explanation as to how our technique will perform in these settings. We will incorporate these detailed discussions in the updated version of our paper. - **Variation in performance improvement:** The reviewers observed that performance improvement varies with the level of heterogeneity. In our rebuttal, we address this issue and explain why we get higher performance gains at higher heterogeneity and vice versa. We will include these explanations in the updated version of our paper. 
- **Breadth of evaluation:** A major concern was that we have evaluated only two KD methods. We agree with the reviewers' concerns about evaluation. Therefore, we have explained that while we have not expanded on breadth by including more KD algorithms, *we have expanded on depth by performing detailed analysis on our two chosen techniques, MOON and FedNTD*. We highlight the importance of our experimental setup combinations and their key takeaways. We will update the paper and highlight the experimental observations there as well. - **Additional theoretical analysis:** Reviewer udjQ asked for some additional theoretical analysis. While our main paper had the theoretical justification for the problem of attack amplification, we did not explicitly go into the theoretical details of the effectiveness of HYDRAFL. In addition, a step-by-step procedure for fitting HYDRAFL on any KD-FL algorithm was not present. To rectify these issues, we have provided the theoretical explanation behind the working of HYDRAFL and the steps to fit HYDRAFL on any KD-FL algorithm. - **Practicality of our experimental setup:** There were practical concerns, such as those related to sampling ratios and choice of datasets. We agree with the reviewers about these issues of practicality. However, we justify that we have used the original MOON's and FedNTD's settings to ensure a fair comparison for the hyperparameters, which the original papers claim are the best-performing ranges for them. - **Additional experiments:** To make a case for our technique's robustness, we performed additional experiments for this rebuttal. Considering time constraints and resource availability, we could not do an exhaustive set of experiments across all setup combinations. Therefore, we show results for MNIST at a very high level of heterogeneity ($\alpha = 0.05$). In this setting, we show both the problem of attack amplification and the performance gains of HYDRAFL. 
- **Referencing in the main paper:** Limitations, Future work, and threat model details were part of the appendix and not adequately referenced in the main paper. We will update the main paper and refer to the relevant sections appropriately. Again, we thank the reviewers for their feedback. We believe all comments above can be addressed through text revisions without requiring substantial experiments. We will enhance the clarity and presentation of the paper. We appreciate the reviewers' recognition and would be grateful for the opportunity to update the paper for acceptance at NeurIPS'24. References: [1] Lin, Tao, et al. "Ensemble distillation for robust model fusion in federated learning." Advances in neural information processing systems 33 (2020): 2351-2363. [2] Hong, Junyuan, et al. "Revisiting data-free knowledge distillation with poisoned teachers." International Conference on Machine Learning. PMLR, 2023.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
ROIDICE: Offline Return on Investment Maximization for Efficient Decision Making
Accept (poster)
Summary: The article proposes a novel offline algorithm for learning policies optimising Return on Investment (ROI): the ratio between the (discounted) cumulated rewards obtained by a policy, and the (discounted) cumulated costs of the actions taken. The paper focuses specifically on offline RL, and in particular builds on formalising the RL problem in a Linear Programming form. The proposed approach builds on the recent literature, particularly DICE-RL, but derives a new regularisation term for ROI optimisation. The experimental evaluation demonstrates that the proposed approach successfully learns policies with high ROI on various offline RL datasets. Strengths: - The idea of optimising ROI is, to my knowledge, novel, and interesting as it proposes a new way to balance the trade-off between action costs and policy rewards, which is relevant in a wide range of applications of RL. - Although the RL literature has proposed ways to handle this trade-offs since its early days (for example, by modelling costs as negative rewards), there is an argument that an ROI objective could have some advantages. - The derivation of the algorithm is well described in the article and is a substantive contribution with respect to the state-of-the-art Weaknesses: - Although optimising ROI could indeed provide a better framework for cost/returns tradeoffs, the article does not provide a substantive argument of why this would be the case. This somewhat reduces the expected impact and relevance of the contribution. - Similarly, the experimental evaluation only compares the proposed approach to RL trained to optimise the return solely, with no constraint for cost. In this setting, it seems self-evident that the proposed method would achieve better ROI and lower costs than a policy trained ignoring cost altogether. 
It is fairly common in RL to approach this tradeoff by integrating the action costs as negative rewards, and it would have been fairer to compare with such an approach as an additional baseline. - One reason for the broad use of ROI in business is that the quantities of "return" and "investment" are usually unambiguously expressed in the same units, making the ratio calculation natural (although different costs can affect the ratio). This is less straightforward in many RL problems, where the return and possible costs may have very different dimensions and distributions. It is unclear from the paper how stable the optimised policies are under different reward/cost scalings. - There is an issue with a number of references in the bibliography that appear to be missing some fields (e.g., refs 6, 7, 9, 10, 11). - Could you provide a better rationale on why ROI would be a good objective for RL in general, beyond its use in the business domain? - The difference in return between standard RL and the ROI version is quite large in the experiments. How significant is the decrease in policy performance compared to cost-free policies in practice for - Does the proposed approach provide some guarantees on cost, compared to, e.g., CoptiDICE? Technical Quality: 3 Clarity: 3 Questions for Authors: Could you provide a better rationale on why ROI would be a good objective for RL in general, beyond its use in the business domain? - The difference in return between standard RL and the ROI version is quite large in the experiments. How significant is the decrease in policy performance compared to cost-free policies in practice for - Does the proposed approach provide some guarantees on cost, compared to, e.g., CoptiDICE? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - No significant ethical aspects to this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
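For concreteness, the ROI objective discussed in this review (discounted cumulative return divided by discounted accumulated cost) can be sketched in a few lines. The trajectories and numbers below are invented for illustration and are not taken from the paper:

```python
# Toy illustration of the ROI objective: discounted return divided by
# discounted accumulated cost. Rewards/costs here are illustrative only.
def discounted_sum(xs, gamma=0.99):
    return sum(x * gamma ** t for t, x in enumerate(xs))

def roi(rewards, costs, gamma=0.99):
    # ROI = (sum_t gamma^t r_t) / (sum_t gamma^t c_t); costs assumed positive
    return discounted_sum(rewards, gamma) / discounted_sum(costs, gamma)

# A cheaper policy can dominate in ROI despite a lower raw return,
# which is exactly the return/cost trade-off the review discusses.
costly = roi([10, 10, 10], [5, 5, 5])  # high return, high cost
frugal = roi([4, 4, 4], [1, 1, 1])     # lower return, much lower cost
assert frugal > costly
```

This also makes the reviewer's scaling question tangible: multiplying all rewards (or all costs) by a constant rescales the ratio uniformly, but changing the *relative* distribution of costs across steps changes which policy has the best ROI.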
Rebuttal 1: Rebuttal: Dear Reviewer Yyxa, We appreciate your feedback and have provided responses to your questions below. **1. Could you provide a substantive argument of why optimizing ROI could offer a better framework for cost/returns trade-offs?** Constrained reinforcement learning is a framework that aims to maximize the return within a given cost budget, rather than optimizing the trade-off between returns and costs. In other words, a constrained RL agent does not try to minimize the accumulated cost if it is below the budget. However, ROI maximization focuses on maximizing the return/cost ratio, meaning the accumulated cost can be minimized further to achieve a higher ROI. If the main goal is to maximize ROI, ROIDICE is much more efficient than constrained RL algorithms. As described in Section 5.1, for ROI maximization, constrained RL requires multiple runs with different levels of cost budgets, and the policy with the best ROI performance can be selected afterward. However, ROIDICE requires only a single run to obtain the policy with the best ROI performance, eliminating the need to search among cost budget candidates. **2. It is fairly common in RL to approach the cost/return trade-off by integrating the action costs as negative rewards.** In our paper, ROIDICE has been compared with offline RL and offline constrained RL. As the reviewer has pointed out, the return/cost trade-off can be optimized with a regularized RL framework with reward $r(s,a)-\lambda c(s,a)$. However, we believe that constrained RL is a better baseline for ROIDICE when considering practical usage. Theoretically, there always exists a regularized RL problem with a certain $\lambda$ that provides the same solution as constrained RL for a given cost budget. In some constrained RL algorithms using Lagrangian methods, $\lambda$ is implicitly optimized to satisfy the cost constraint [1]. 
However, finding an appropriate $\lambda$ to compare regularized RL methods with ROIDICE can be difficult, particularly when the scale of reward and cost varies significantly. In contrast, constrained RL is more practical, as a suitable cost budget can be inferred from the dataset, alleviating the need for an explicit search for $\lambda$. **3. How stable are the policies optimized in problems where the returns and possible costs may have very different dimensions and distributions?** As mentioned in the Limitations section, ROIDICE can be influenced by the design of reward and cost functions. The variation of reward and cost across state-action pairs significantly impacts ROIDICE, as it maximizes the ratio of return to accumulated cost. For instance, as noted in Section 3, when the cost is constant across all states and actions, the ROIDICE policy becomes equivalent to the standard offline RL policy. Conversely, as the distributions of cost become more diverse, the ROIDICE policy deviates from the offline RL policy. On the other hand, just as typical RL algorithms are not affected by the scale of the reward function, ROIDICE is essentially invariant to different scales of rewards and costs, apart from the choice of an appropriate $\alpha$. We also empirically observed that ROIDICE achieved the same level of performance when $\alpha$ is appropriately scaled to align with the cost scales. **4. Could you provide a better rationale on why ROI would be a good objective for RL in general, beyond its use in the business domain?** We can come up with a number of practical scenarios where ROI would be a good objective, such as optimizing the mileage (km/L) of an autonomous vehicle, the (decision optimality/planning time) trade-off when adopting a meta controller for real-time decision making, or the speed-accuracy trade-off (generation quality/inference time) of a large language model (LLM). We illustrate a comparison between two approaches for optimizing the mileage problem. 
- Constrained RL: An agent trained with constrained RL uses a limited budget to maximize the reward. Setting an appropriate budget that is feasible for the problem may require domain knowledge. Therefore, to compare the ROI performance of its policy, it is necessary to conduct multiple runs of constrained RL with different levels of constraints (e.g., fuel consumption $\leq$ [5, 10, 15, 20]). The policy with the best mileage is then selected from these multiple runs of constrained RL. - ROI maximization: A single run of ROIDICE is sufficient to find a policy that maximizes the vehicle's mileage. Ideally, the mileage (ROI) performance of the ROIDICE policy would be equal to or greater than that of the constrained RL policy. Equality occurs when the upper bound of the constraint matches the fuel consumption of the ROIDICE policy. **5. How significant is the decrease in policy performance compared to cost-free policies in practice for?** The remark in Section 3 notes that the optimal policies of standard RL and ROI maximization are equivalent when the cost is constant across all states and actions. This gap increases when the cost varies among states and actions, or when there is an action with high cost that leads to high return. The difference between standard RL and ROIDICE can be managed by adjusting the cost distribution. **6. Does the proposed approach provide some guarantees on cost?** The current form of ROIDICE does not guarantee an upper bound on cost. However, ROIDICE with a cost guarantee can be easily derived by adding the cost constraint $\sum_{s,a}d(s,a)c(s,a)\leq C_{\text{threshold}}$ to the LP formulation of our algorithm, ROI-LP. Thanks for pointing out the issue with references. We will fix this in the next revision. [1] J. Lee, et al. COptiDICE: Offline constrained reinforcement learning via stationary distribution correction estimation. ICLR, 2022. 
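The remark invoked in point 5 (a constant cost makes the ROI-optimal and return-optimal policies coincide) can be sanity-checked numerically. The sketch below is a toy construction of my own, not the authors' code: with constant cost $c$, every policy accumulates the same discounted cost $c/(1-\gamma)$, so ranking policies by ROI is the same as ranking them by return.

```python
# Toy check (illustrative MDP, not the paper's setup): with a constant cost,
# the discounted accumulated cost c/(1-gamma) is identical for every policy,
# so argmax-return and argmax-ROI select the same deterministic policy.
import numpy as np
from itertools import product

gamma, n_states, n_actions = 0.9, 2, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] over s'
r = rng.random((n_states, n_actions))            # random rewards
c = np.full((n_states, n_actions), 0.5)          # constant cost
p0 = np.array([1.0, 0.0])                        # initial state distribution

def discounted(policy, f):
    # E[sum_t gamma^t f(s_t, a_t)] for a deterministic policy, via (I - gamma P)^-1
    P_pi = np.array([P[s, policy[s]] for s in range(n_states)])
    f_pi = np.array([f[s, policy[s]] for s in range(n_states)])
    return p0 @ np.linalg.solve(np.eye(n_states) - gamma * P_pi, f_pi)

policies = list(product(range(n_actions), repeat=n_states))
returns = [discounted(pi, r) for pi in policies]
rois = [discounted(pi, r) / discounted(pi, c) for pi in policies]
assert np.argmax(returns) == np.argmax(rois)     # rankings coincide
```

Making the cost non-constant (e.g., making one action much more expensive) is exactly what causes the two argmaxes to diverge, which is the gap discussed in the rebuttal.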
--- Rebuttal Comment 1.1: Title: Good rebuttal Comment: Thank you for the detailed and thoughtful response to my comments. After reading the response (and other reviewers' comments), I would like to raise my rating to 6. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Yyxa, We're pleased to know that your concerns have been resolved, and we truly appreciate your positive feedback! However, it seems the system is still displaying the initial review score. Could you kindly verify if your updated score has been correctly applied? Thank you so much. --- Rebuttal 2: Title: Rebuttal discussion Comment: Dear Reviewer Yyxa, Could you please take a look at the rebuttal and share your thoughts on the authors' reply? The author has provided a detailed response to the initial comments - does it address your major concerns? Thank you very much for your time and effort! Best, Your AC
Summary: This paper introduces a novel approach for solving constrained offline reinforcement learning problems. The authors apply the Charnes-Cooper transformation to convert the linear-fractional program into an equivalent linear program, and draw inspiration from the DICE framework to maximize ROI in the offline setting. Experimental results across finite and continuous domains demonstrate that the proposed ROIDICE achieves a better trade-off between return and accumulated cost. Strengths: * The paper's approach is mathematically sound, and the motivation is well elucidated. * The experimental results conducted in various environments demonstrate its superiority over the baseline. Weaknesses: * Important baselines are missing: apart from DICE-based methods, comparisons should include other offline constrained RL methods. * How is the proposed method implemented in continuous domains? The description lacks clarity; for instance, providing pseudocode would be helpful. * In the FinRL task, it's intriguing why the unconstrained OptiDICE method performed well while other constrained baselines showed poor performance. Technical Quality: 3 Clarity: 3 Questions for Authors: - It is suggested to add a discussion of related works and compare them in experiments, such as [1,2]. - Can you explain why using constraint methods in the FinRL environment actually yields poor results? - Can you provide experimental results on widely recognized safe RL environments such as SafetyGymnasium[3,4] and BulletSafetyGym[5]? [1] Xu H, Zhan X, Zhu X. Constraints penalized q-learning for safe offline reinforcement learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2022, 36(8): 8753-8760. [2] Guan J, Chen G, Ji J, et al. VOCE: Variational optimization with conservative estimation for offline safe reinforcement learning[J]. Advances in Neural Information Processing Systems, 2024, 36. [3] Ray A, Achiam J, Amodei D. 
Benchmarking safe exploration in deep reinforcement learning[J]. arXiv preprint arXiv:1910.01708, 2019, 7(1): 2. [4] Ji J, Zhou J, Zhang B, et al. Omnisafe: An infrastructure for accelerating safe reinforcement learning research[J]. arXiv preprint arXiv:2305.09304, 2023. [5] Gronauer S. Bullet-safety-gym: A framework for constrained reinforcement learning[J]. 2022. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As mentioned in the paper, the ultimate performance of ROIDICE is contingent upon the dataset and exhibits sensitivity, potentially resulting in behaviors such as minimizing costs excessively. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer T1wG, We thank the reviewer for the detailed feedback. We address your remarks below. **1. Important baselines are missing: apart from DICE-based methods, comparisons should include other offline constrained RL methods.** Our experiment in continuous domains from Section 5.2 includes a non-DICE offline constrained algorithm, CDT [1]. We additionally provide experiments on other offline constrained RL frameworks, VOCE [2] and CPQ [3]. Table 1 in the rebuttal supplementary presents the ROI performance of these constrained offline RL algorithms in the Hopper environment [4]. These algorithms use two cost budgets, corresponding to the 50th and 80th percentiles of the accumulated costs from the offline dataset. Over various experiments, constrained offline RL algorithms have commonly shown low ROIs. We conjecture that these results are mainly due to the overestimation of costs, making constrained offline RL algorithms overly conservative in choosing costly state-actions. On the other hand, ROIDICE seems to be less affected by such overestimation, as overestimated costs are partially offset by overestimated rewards when estimating ROIs. **2. How is the proposed method implemented in continuous domains?** We provide the pseudocode of ROIDICE in the rebuttal supplementary. Lagrangian multipliers are parameterized and updated to optimize the ROIDICE objectives presented in our paper. The optimal policy $\pi^*_\theta$ is extracted from the optimal ratio $w^{*}_{\nu,\mu,t}$ obtained from the Lagrangian multipliers. **3. In the FinRL task, why did unconstrained RL perform better than constrained RL, and what makes the constrained RL methods perform poorly?** As reported in the appendix, in the FinRL task, constrained RL algorithms try to achieve lower cost than unconstrained RL algorithms by sacrificing a large amount of return. 
While both constrained and unconstrained RL agents result in policies with smaller cost than the cost budget, we assume that constrained RL algorithms largely overestimate the cost, resulting in overly conservative policies. **4. Can you provide experimental results on widely recognized safe RL environments?** We evaluate our approach on OpenAI SafetyGym [5] tasks including CarGoal and PointPush. Table 2 in the rebuttal supplementary materials provides results averaged over 5 seeds across 10 episodes. Similar to the results reported in the paper, we observed that ROIDICE outperforms COptiDICE in terms of ROI. We use the rewards and costs from the environment, adding a constant value of $\epsilon=0.1$ to each cost to maintain our assumption that $c(s,a) > 0 \ \forall s,a$. The offline dataset is collected by PPO-Lagrangian. ROIDICE uses $\alpha=0.001$ for CarGoal and $\alpha=0.01$ for PointPush. COptiDICE [6] uses $\alpha=0.01$ for CarGoal and $\alpha=1.0$ for PointPush. We will include the results in the final version of the paper if accepted. [1] Z. Liu, et al. Constrained decision transformer for offline safe reinforcement learning. ICML, 2023. [2] J. Guan et al. VOCE: Variational optimization with conservative estimation for offline safe reinforcement learning. NeurIPS, 2024. [3] H. Xu, et al. Constraints penalized q-learning for safe offline reinforcement learning. AAAI, 2022. [4] J. Fu, et al. D4RL: Datasets for deep data-driven reinforcement learning, 2020. [5] A. Ray, et al. Benchmarking safe exploration in deep reinforcement learning. arXiv preprint arXiv:1910.01708, 2019. [6] J. Lee, et al. COptiDICE: Offline constrained reinforcement learning via stationary distribution correction estimation. ICLR, 2022. --- Rebuttal Comment 1.1: Title: Rebuttal discussion Comment: Dear Reviewer T1wG, Could you please take a look at the rebuttal and share your thoughts on the authors' reply? 
The author has provided additional comparisons to baselines, explanations, and more experimental results. Does the rebuttal address your initial concerns? Thank you very much for your time and effort! Best, Your AC --- Rebuttal Comment 1.2: Title: Good rebuttal that addressed most of my concerns Comment: I thank the authors for providing detailed responses to my comments. Reading the rebuttal materials solved most of my concerns. I also read the comments from other reviewers and still rate this work positively. Good luck to the authors for the final decision of this work. Best wishes,
Summary: This paper provides a linear-fractional programming framework for solving offline RL problems for return on investment (ROI). The linear-fractional program can be transformed into a linear program, and a convex regularizer is used to control the distribution mismatch. The authors provide adequate experimental results to demonstrate the advantages of the algorithm. Strengths: The framework is clean, using linear programming to model the offline ROI maximization problem, and the authors provide sufficient experimental results with good performance. Weaknesses: No sample complexity guarantee. See questions for details. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the authors provide theoretical analysis for the sample complexity, as in the paper "Ozdaglar et al., Revisiting the linear-programming framework for offline RL", which also uses the LP framework to solve offline RL? 2. How can we implement this LP using function approximation when the state-action space is large? 3. Missing references: There are papers using LP frameworks to solve offline RL with sample complexity guarantees, such as the two papers mentioned before. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 8zrU, We appreciate your comprehensive comments. Please find the response to your questions below. **1. Theoretical analysis for the sample complexity in ROI-LP.** Sample complexity analyses in the LP formulation of RL traditionally estimate a policy's return using a linear combination of its stationary distribution and reward. On the other hand, ROI is defined as a ratio between the linear combinations of the stationary distribution with reward and cost, making existing approaches not directly applicable. To illustrate this, we demonstrate that the method proposed in [1] cannot be directly applied to determine the sample complexity of the ROI-LP. Specifically, Lemma 3 in [1] cannot be straightforwardly derived from the ROI-LP. The lemma assumes an approximated stationary distribution ${\theta}\in \mathbb{R}^{|S||A|}$ and policy $\pi_{\theta}$ extracted from $\theta$. $d_{\pi_{\theta}}\in \mathbb{R}^{|S||A|}$ is the true stationary distribution generated by policy $\pi_{\theta}$. Lemma 3 provides an error bound that relates the Bellman flow constraint violation of $\theta$ and the absolute difference between $\sum_{s,a}\theta(s,a)r(s,a)$ and $\sum_{s,a}d_{\pi_\theta}(s,a)r(s,a)$. **Lemma 3** For any $\theta\geq0$ and $r(s,a)\in[0,1]$, we have $|\sum_{s,a}r(s,a)(\theta(s,a)-d_{\pi_{\theta}}(s,a))|\leq{||M\theta -(1-\gamma)p_{0}||_{1} \over 1-\gamma}$, where $M=B_{\*}- \gamma T_{\*}$. We show that a straightforward extension of the lemma is not feasible in the ROI-LP context. The pair $(\tilde{\theta},\tilde{t})$ serves as an approximation of $(d^{'},t)$ in ROI-LP, and the policy $\pi_{\tilde{\theta}}$ is extracted from $\tilde{\theta}/\tilde{t}$. The actual stationary distribution produced by $\pi_{\tilde{\theta}}$ is $d_{\pi_{\tilde{\theta}}}^{'} / t_{\pi_{\tilde{\theta}}}$. We begin by assessing the extent to which $(\tilde{\theta},\tilde{t})$ violates the scaled Bellman flow constraint in ROI-LP. 
$\Vert M\tilde{\theta}-(1-\gamma)p_{0}\tilde{t}\Vert_{1} = \Vert M\tilde{\theta}-M\tilde{t}d_{\pi_{\tilde{\theta}}}^{'} / t_{\pi_{\tilde{\theta}}}\Vert_{1}\geq(1-\gamma)\Vert B_{\*}\tilde{\theta}-B_{\*}d_{\pi_{\tilde{\theta}}}^{'} \tilde{t}/t_{\pi_{\tilde{\theta}}}\Vert_{1}$ where a valid stationary distribution $d_{\pi_{\tilde{\theta}}}^{'} / t_{\pi_{\tilde{\theta}}}$ satisfies $(1-\gamma)p_0 = Md_{\pi_{\tilde{\theta}}}^{'} / t_{\pi_{\tilde{\theta}}}$. The difference between $\sum_{s,a}\tilde{\theta}(s,a)r(s,a)$ and $\sum_{s,a}d_{\pi_{\tilde{\theta}}}^{'}(s,a)r(s,a)$ can be bounded by $|\sum_{s,a}r(s,a)(\tilde{\theta}(s,a) - d_{\pi_{\tilde{\theta}}}^{'}(s,a))|\leq\sum_{s}|\sum_a \tilde{\theta}(s,a) - d_{\pi_{\tilde{\theta}}}^{'}(s,a)|=\Vert B_{\*}\tilde{\theta}-B_{\*}d_{\pi_{\tilde{\theta}}}^{'}\Vert_{1}$. It can be noted that, due to the additional factor $\tilde{t} / t_{\pi_{\tilde{\theta}}}$, which cannot be easily bounded under finite samples, ROI-LP cannot use the same proof technique as [1] for sample complexity analysis. While sample complexity analysis is indeed an important research topic, we believe that conducting a new type of sample complexity analysis appropriate for the linear-fractional programming framework is beyond the scope of this paper. [1] Ozdaglar, Asuman E., et al. "Revisiting the linear-programming framework for offline RL with general function approximation." International Conference on Machine Learning. PMLR, 2023. --- Rebuttal Comment 1.1: Title: Rebuttal discussion Comment: Dear Reviewer 8zrU, Could you please take a look at the rebuttal and share your thoughts on the authors' reply? The author has provided additional theoretical analysis and clarification in response to Q1. Does the rebuttal sway your opinions? Thank you very much for your time and effort! Best, Your AC --- Rebuttal Comment 1.2: Comment: I thank the authors for the response. I will increase the score to 6. 
But I still think that, for offline RL papers, sample complexity and the function-approximation setting for large state-action spaces are important.
Summary: The paper addresses the problem of maximizing the Return on Investment (ROI) in the context of offline reinforcement learning (RL). The method introduced is ROIDICE, which stands for Return on Investment Decision-making in the Offline Setting. ROIDICE is an offline policy optimization algorithm designed to optimize the ROI of a policy using a fixed dataset of pre-collected experiences. The method involves formulating the ROI maximization problem as a linear-fractional programming problem, which is then converted into an equivalent linear programming problem using the Charnes-Cooper transformation. This transformation allows the problem to be solved using standard linear programming techniques. To address the distribution shift inherent in offline learning, ROIDICE incorporates a convex regularization that measures the amount of distribution shift while maintaining the convexity of the problem. This regularization is designed to penalize the trained policy's deviation from the behavior policy used to collect the offline dataset. The experiments conducted in the paper demonstrate that ROIDICE outperforms other offline RL algorithms, including those that focus solely on maximizing the return of the policy. ROIDICE achieves a superior trade-off between return and accumulated cost, resulting in a more efficient policy. Strengths: 1. The paper introduces a novel policy optimization framework that maximizes the Return on Investment (ROI) of a policy. This is a significant departure from traditional approaches that focus solely on maximizing return, without considering the cost associated with the actions. 2. The proposed method operates within an offline setting, where direct interaction with the environment is not possible. This is particularly useful in scenarios where online interaction is costly or risky. 3. The authors derive an offline algorithm, ROIDICE, which optimizes the ROI of a policy using a fixed dataset of pre-collected experiences. 4. 
The paper demonstrates that ROIDICE yields policies with better efficiency than policies from existing RL-based optimization methods. This is achieved by considering the trade-off between return and accumulated cost. The authors conduct extensive experiments across various domains, including locomotion tasks and a financial stock trading task. Weaknesses: ROIDICE may require significant computational resources, especially when dealing with large-scale datasets and complex environments. Could you provide details on the computational resources and time consumption for different methods? If the data distribution in the offline dataset significantly differs from that encountered in the online environment, the performance of ROIDICE may be affected. How can this issue be addressed? Technical Quality: 2 Clarity: 3 Questions for Authors: See Weakness Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Z94y, We thank the reviewer for the thorough and constructive comments. We hope we can address your concerns below. **1. Could you provide details on the computational resources and time consumption for different methods?** We provide details on the resources and runtime to demonstrate that ROIDICE does not require a significant amount of resources for large-scale domains, such as locomotion and finance. All algorithms, including baseline methods, were trained for 100K iterations on a single NVIDIA RTX 4090 GPU.

| | ROIDICE (Ours) | OptiDICE [1] | COptiDICE [2] | CDT [3] |
|---|---|---|---|---|
| Run time (wall-clock), locomotion | 10 min. | 8 min. | 35 min. | 150 min. |
| Run time (wall-clock), finance | 20 min. | 16 min. | 120 min. | 250 min. |
| # of parameters | 140K | 140K | 357K | 730K |

**2. What if the data distribution in the offline dataset significantly differs from that encountered in the online environment?** In the context of offline reinforcement learning (RL), a major challenge arises from the degradation in performance due to increasing discrepancies between the offline data distribution and the online environment. To mitigate this issue, DICE-based RL frameworks (such as ROIDICE, OptiDICE, and COptiDICE) employ an f-divergence to regulate the extent of conservativeness, which is modulated by the hyperparameter $\alpha$. As illustrated in Figure 5 of Appendix D, which corresponds to Figure 1 in the supplementary materials, selecting an appropriate $\alpha$ can identify an offline RL policy that is less affected by distributional shift. [1] J. Lee, et al. OptiDICE: Offline policy optimization via stationary distribution correction estimation. ICML, 2021. [2] J. Lee, et al. COptiDICE: Offline constrained reinforcement learning via stationary distribution correction estimation. ICLR, 2022. [3] Z. Liu, et al. Constrained decision transformer for offline safe reinforcement learning. ICML, 2023. 
--- Rebuttal Comment 1.1: Title: Rebuttal discussion Comment: Dear Reviewer Z94y, Could you please take a look at the rebuttal and share your thoughts on the authors' reply? The author provided additional results on the computational cost and discussion on the online setting. Do they address your concerns? Thank you very much for your time and effort! Best, Your AC
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their time and effort in providing constructive feedback and insightful reviews of our paper. We are grateful that the reviewers recognized our paper for presenting a novel offline policy optimization framework that effectively optimizes the trade-off between return and accumulated cost, is mathematically well-derived, and yields policies with better efficiency across various domains. In response to the valuable feedback from the reviewers, we address each concern individually. Below, we summarize the major points of the feedback: - How to address the discrepancy between the offline data distribution and the online environment. - Whether a theoretical analysis of sample complexity is available for ROI-LP. - Comparative analysis of ROI performance across policy optimization approaches. - Clarifying the advantages of ROIDICE for return/cost trade-off optimization. - Exploring potential applications of ROIDICE beyond business domains. Pdf: /pdf/c04c115914e1608b0fb0945cc0f1c8302cdcb2cc.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Fair and Welfare-Efficient Constrained Multi-Matchings under Uncertainty
Accept (poster)
Summary: The authors propose group-fair multi-matching algorithms in the presence of uncertainty about the valuations of agent-item pairs. The algorithms fall under two families: Conditional Value at Risk (CVaR), and robust optimization. The types of utilities they consider are also two-fold: standard utilitarian welfare, and egalitarian group-fairness. The latter seeks to maximize the utility of the group with the minimum utility. The authors begin with the robust optimization approach for overall utility maximization, where they derive max-min guarantees for the worst possible valuation within a particular valuation uncertainty set. Here, they propose solving a natural nested concave-convex program. For computational feasibility, they would like to utilize the structure of the specific problem in order to achieve a more efficient problem framing. To do this, they demonstrate that the nested program (eq 1) can be re-written as a single concave cubic program. Furthermore, the optimal assignment can be found by solving a particular system of equations with the optimal dual variables (Proposition 3.1). They also show that under some simplified assumptions on the uncertainty set containing the true valuation matrix, the optimal solution can be found by simply solving an LP or QP (Prop. 3.2, 3.3). The authors then move to robust optimization for egalitarian welfare. Here, the problem is more complex since the objective function loses its smoothness, and is now a nested max-min-min optimization problem. They make a key assumption on the independence of the uncertainty set w.r.t. the data groups, which helps them swap the minimum over groups and the minimum valuation over uncertainty sets. By another dual derivation, one can solve the resulting problem with iterated quadratic programming (Prop. 3.4). Furthermore, the problem has the nice property that under simplified uncertainty sets, it still reduces to solving a single LP (Prop. 3.5). 
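The robust max-min objective summarized above can be illustrated with a deliberately tiny brute-force example. This is a generic sketch of my own, assuming a finite uncertainty set and a one-item-per-agent matching; it is not the paper's concave-convex program, which handles continuous uncertainty sets and constrained multi-matchings via the dual derivations described here:

```python
# Brute-force robust matching on a 2x2 toy instance (illustrative only):
# pick the assignment maximizing the worst-case utilitarian welfare over a
# finite set of candidate valuation matrices (rows: agents, cols: items).
from itertools import permutations

uncertainty_set = [
    [[3.0, 1.0], [1.0, 3.0]],  # scenario 1
    [[2.0, 3.0], [3.0, 2.0]],  # scenario 2
]

def welfare(valuation, assignment):
    # assignment[i] = item matched to agent i (exactly one item each)
    return sum(valuation[i][assignment[i]] for i in range(len(assignment)))

assignments = list(permutations(range(2)))
robust_value, robust_assignment = max(
    (min(welfare(v, a) for v in uncertainty_set), a) for a in assignments
)
# Assignment (0, 1) guarantees welfare 4 in the worst case; (1, 0) only 2.
assert robust_assignment == (0, 1) and robust_value == 4.0
```

The enumeration over assignments and scenarios is exponential in general, which is precisely why the paper reformulates the nested max-min as a single tractable program via duality.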
The authors also note that the problem can be decomposed in the case of monotonic welfare functions. The next topic covered is the CVaR approach, which assumes that the distribution over agent-item valuations is known to the mechanism designer. They begin with the optimal utilitarian assignment. Here, the optimal solution can be computed, but may be computationally infeasible for an arbitrary distribution over valuation matrices $D$. Therefore, the authors adopt an approach in which they sample many valuation matrices from the distribution, and solve a particular linear program which computes an empirical estimate of the $\alpha$-CVaR. Prior work shows that this is a consistent estimator as the number of samples goes to infinity, but they provide a concrete sample complexity bound for subgaussian distributions (Prop. 4.3), under Assumption 4.2. When the distribution is normal, only the mean and covariance matter, and a particular QP can be solved (Prop. 4.4). Similar results are discussed for the group egalitarian assignment in Section 4.2. The authors close with a series of experiments on real AAMAS reviewer bid data, where they are able to demonstrate the effectiveness and scalability of their method to a modest sample size. Here, uncertainty is defined by a model fit to predict reviewer bids on unseen examples. Strengths: The problem considered by the authors, namely, understanding the role of uncertainty in matching algorithms, is important and well-motivated. Furthermore, their setup is extremely practical and testable. I enjoyed reading about the optimization tricks and problem-specific structure used to solve each of the instances. I found the fact that the optimal assignment was given by a particular system of linear equations with the optimal dual variables very neat. 
Overall, I do not think that any other work in the literature has explored and proposed feasible algorithms for matching in the presence of uncertainty in such a variety of settings (robust optimization under both utility optimal and egalitarian matchings, similarly for CVaR). I also appreciate that the code for running all programs was included. It can be quite difficult to implement programs of this complexity in practice (which a skim of the implemented gurobi programs can verify). I encourage the authors to consider cleaning up and releasing the code as a package, there is certainly a dearth of available code for efficient matching algorithms online currently! Weaknesses: I believe that there are numerous opportunities to improve the quality of presentation in the main paper. If addressed, I plan to raise my score. Most importantly, Propositions 3.4, 3.3, 3.1 should — in my opinion — be stated informally in the main text. I don’t believe the reader will gain much by staring at the (impressive!) equations. Having the informal version retain the key takeaway — we can reduce to solving a system of equations with the optimal dual variables — would make it much easier to parse, and also emphasize the important (and cool) part of the proposition. In lines 25-30, you specify where uncertainty may arise from. I would also consider mentioning how you capture this uncertainty: namely, by modeling uncertainty over agent-item pair valuations. These valuations completely capture the “preferences” you discuss later in the paragraph. However, when I think of matchings and preferences, concepts like stability come to mind, which is a much harder constraint to satisfy than utility maximization. I detail more areas for improving clarity and preciseness of writing in the questions section of my response. Most are minor besides simplifying the main statement of the propositions. 
If accepted, I also recommend utilizing the additional space to discuss the experiments in further detail. In particular, is table 2 reporting normalized utilities? Why are most of the results 1.0? Is this to be expected? Technical Quality: 4 Clarity: 3 Questions for Authors: Q1. 113 “always the case that the constraints define a finite set such that |A| ∈ N.” Do you include the possibility that |A| = 0? Q2. 116 “Let u : A × $\mathcal{V}$ × G → R be an affine function mapping from allocations to utilities for each group.” I don't think $\mathcal{V}$ was defined before this statement, but am assuming that it refers to the set of all valuation matrices. However, in 131, you define $\mathcal{V}$ to be the uncertainty set containing the true valuation matrix with high probability. There is probably a cleaner way to get around this without overloading, since it seems the uncertainty set definition is more important for your discussion throughout. Perhaps simply define the utility function to map from an arbitrary collection of valuations to R, instead of the set of all valuations? Q3. 133 Not sure whether $W({u}(a))$ formally type checks properly, shouldn’t $u$ be a function of $V$ as well as the assignment $a$ (and also groups, as defined above?) Q4. 134 $D_v$ is a distribution over valuation matrices? This seems to be defined in the next paragraph 136-145. Is it possible to switch the order of these two paragraphs? Q5. 184 can probably do without the $\forall G \in \mathcal{G}$ in the sentence immediately following $w_G$. Also, I believe that the presentation would be enhanced by considering numbers to be not-bold (e.g. $w_G$, $u_{G_1}$, etc.), and to keep vectors bold (which the authors have already done). This would certainly help parse the notation, since there is quite a bit. Q6. 224: valuation matrices of each group are independent of each other. What are the practical implications of this assumption? 
What kind of problems are we not capturing by making this assumption? This should be stated clearly, and perhaps the assumption should be stated more formally. Q7. Equation (9), should the LHS term have an overset of $\tilde{v} \sim D_v$ for the $\mathrm{CVaR}_\alpha$ term? My understanding is that CVaR is w.r.t. a particular distribution $D$, not a particular draw of the random variable. Are you sweeping this into the CVaR term somehow? Q8. 326: “Section 5.1”, should this be Table 2? Q9. How is adversarial welfare defined? The worst welfare within the uncertainty set? Q10. I do not believe that $W_{USW}$ or $W_{GESW}$ appear in the main paper, and hence, you may be able to remove these acronyms from section 2.1. Furthermore, throughout the paper, you seem to not use the acronyms until presenting the experimental results (which makes sense). Q11. 339: “Although the CVaR approach is less important at low noise levels, the CVaR of welfare decreases for both welfare measures as noise increases”. As someone not super familiar with CVaR, why is it less important at lower noise levels? Do we expect the noise to mainly impact the right tail or something like that? Q12. 178-180: rounding algorithms in the case of fractional solutions. Do you think you can expand a bit on how these randomized rounding algorithms apply to your specific experiments? How much of a difference do they make? Is there a Birkhoff-von-Neumann decomposition-type result for randomized multi-matchings which can be used to sample? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The limitations are discussed in the main body of the paper, the most salient of which is that CVaR requires solving a large number of linear programs to obtain the desired guarantees, and hence can be slow in practice. I don't see this as that big of a problem, however, especially since in the proposed setting of reviewer assignment, only a single matching is constructed and utilized. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed comments and helpful suggestions. Please see our replies below. **Propositions 3.4, 3.3, 3.1 should be stated informally in the main text.** Thank you for the suggestion. We agree that the equations make it less readable. We will replace them with informal statements describing the propositions, and move the formal statements to the appendix. **Mention how you capture this uncertainty.** Thank you for the suggestion. We will incorporate this in the main paper. **Why are most of the results 1.0? Is this to be expected?** We briefly mentioned on line 329 that “objective values are normalized by dividing by the maximum value of that objective per seed.” Table 2 reports means and standard errors over 5 runs. For each run, we execute 6 algorithms (the 6 rows in the table), and evaluate each of those 6 allocations on all 6 objective values. So we get five 6x6 matrices. For each run, we normalize each column by dividing all entries by the max value, so the maximum value is 1.0. We then take the element-wise means and standard errors of those 6x6 matrices over all 5 runs. We normalize in this manner because (a) the absolute optimal welfare differs across runs, and (b) we wanted to highlight the fact that the diagonal is expected to be (weakly) better than the off-diagonal in each run. **Do you include the possibility that |A| = 0?** Yes, our proposed techniques can handle this special case. However, we observe that in almost all allocation problems, we would have constraints on the number of items that can/must be assigned to an agent. **There is probably a cleaner way to get around this without overloading.** You’re right, this is overloaded. Thank you! We can just replace that with $[0,1]^{n \times m}$. **Shouldn’t $u$ be a function of $V$ as well as the assignment $a$, and also groups, as defined above?** We would like to clean up this notation a bit in general. 
It is cleaner to write that $\bf u$ is a vector-valued function giving the welfare for all groups under an allocation and valuation matrix, $\mathbf{u}: \mathcal A \times [0, 1]^{n \times m} \to \mathbb R^g$. On line 133 we will write $W(\mathbf u(\mathbf a, \mathbf v))$. **Are you sweeping this into the CVaR term somehow?** Yes, that’s right. We were “sweeping this into the CVaR term” as you say, but it would be much clearer if we include the overset. In fact, this is the same distribution over $\tilde{v}$ as in the $\mathbf{E}\_{\tilde{v} \sim D_v}$ operator on the RHS, so it should be written on the LHS too. We will do the same for the $\mathrm{CVaR}_\alpha$ and $\mathbb{E}$ expressions in Equation 12. **How is adversarial welfare defined?** We can include a sentence in the preliminaries explaining that once an allocation is fixed, we call the minimum welfare over the uncertainty set the “adversarial” welfare. **Why is CVaR less important at lower noise levels?** At low noise levels, the CVaR measures (and optima) for sufficiently large values of $\alpha$ will be fairly close to the central estimates (and optima based on them) of welfare. As the noise increases, the variance increases. This results in long-tailed distributions, and thus the CVaR optimizer will produce different solutions from the optimizer of the central estimate. By definition, the robust approach is more sensitive than CVaR, which explains the observed differences between them. For further details on how CVaR provides robustness against high risks in fat-tailed distributions, please refer to Example 2.1 and [1]. [1] R. Tyrrell Rockafellar and Stanislav Uryasev. (2002). Conditional Value-at-Risk for General Loss Distributions. Journal of Banking and Finance. **Can expand a bit on how these randomized rounding algorithms apply to your experiments?** Thank you for bringing up this point. Yes, indeed there is a generalization of Birkhoff-von-Neumann decomposition to multi-matchings [1]. 
[2] does a nice job of explaining its application in randomized reviewer assignments specifically. We should make this clearer in the paper, but we did not round the robust USW/GESW solutions reported in the experiments. In the case of the uncertainty-unaware USW/GESW and the sampling-based CVaR problem for both USW and GESW (the first 4 rows of all our tables), we were able to express these programs as MILPs and directly solve for the optimal integer solution. We recommend using this approach whenever it is feasible. Theorem 14 of [3] demonstrates that although the rounded robust solutions may differ significantly from the fractional robust solutions, in the case of USW the welfare under the true valuations remains high after rounding. In our experiments, we do not assume access to ground truth valuations, so we cannot directly test this. Rounding often degrades the maximin objective value. In a simple analogy, imagine we have 2 coins. We have to pick one coin to flip, and we want it to be heads. An adversary can force one coin to always be tails. If we select a distribution over the coins, and sample from the distribution before the adversary selects their coin, we can obtain a non-zero probability of getting heads. But it would be a very powerful adversary (perhaps unreasonably powerful) if they could see which coin we picked and force it to be tails. That being said, using the randomized rounding procedure with our methods results in allocations that are robust against adversaries in expectation. [1] Budish, E., Che, Y.K., Kojima, F., & Milgrom, P. (2009). Implementing random assignments: A generalization of the Birkhoff-von Neumann theorem. [2] Jecmen, S., Zhang, H., Liu, R., Shah, N., Conitzer, V., & Fang, F. (2020). Mitigating manipulation in peer review via randomized reviewer assignments. Advances in Neural Information Processing Systems, 33. [3] Cousins, C., Payan, J., & Zick, Y. (2023). 
Into the unknown: Assigning reviewers to papers with uncertain affinities. International Symposium on Algorithmic Game Theory, 16. --- Rebuttal Comment 1.1: Comment: The authors have addressed all my concerns and I have raised my score by one point. I found the submission very interesting, and think the paper will be a great contribution to the area! --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you so much for promptly reviewing our rebuttal and raising the score. We greatly appreciate it and are glad that you found our paper interesting!
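For concreteness, the per-run normalization described in the rebuttal above (divide each column of the algorithms-by-objectives matrix by its maximum, then average element-wise over runs) can be sketched as follows; the matrix values below are made up purely for illustration, and the paper uses 6x6 matrices over 5 runs:

```python
import numpy as np

# Hypothetical per-run objective matrix: rows = algorithms, columns = objectives
# (6x6 in the paper; 2x3 here, with made-up values purely for illustration).
run = np.array([[3.0, 2.0, 0.9],
                [2.4, 2.5, 0.9]])

# Divide each column by its maximum, so the best algorithm per objective scores 1.0;
# the diagonal of the full matrix is then expected to be (weakly) largest.
normalized = run / run.max(axis=0, keepdims=True)

# Element-wise mean over runs (here: two identical runs) gives the reported table.
mean_over_runs = np.stack([normalized, normalized]).mean(axis=0)
```

This makes it clear why many reported entries equal 1.0: every column contains its own per-run maximum by construction.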
Summary: This paper considers a new fair allocation problem. In the classical fair allocation problem, each item can only be matched with at most one agent, and the utilities are known a priori. This paper considers a variant where each item is required to be matched with some number of agents. Agents can also be matched with some number of items. The utilities are not known a priori. The problem is motivated by the paper-reviewer matching system. Each agent represents a reviewer and each item is a paper. A paper is required to be assigned more than one reviewer. Each reviewer can be matched to more than one paper. In addition, each reviewer has a capacity that indicates the maximum number of papers assigned to her. The utilities are not known a priori because the review quality is only known after the assignment. The main contribution of this work is that they designed an efficient method to optimize both the utilitarian and egalitarian objectives. The main technical tools come from the robust optimization area. In particular, they transform the problem into a pure linear programming problem via a series of propositions. Strengths: 1. The paper is well-motivated. I like the model studied in this paper. The reviewer-paper matching algorithm is important. I expect the proposed algorithmic ideas to have a positive practical impact. 2. The paper is well-written. As a theoretician, I appreciate that the theory part of the paper is carefully organized. The whole proof idea is clear to me. Weaknesses: I have to say that I usually work on approximation algorithms and I am not in the right position to judge the technical novelty of these continuous methods. To me, the downside is that the running time of the proposed algorithm is high, especially since the algorithm is required to solve LPs. This limits the applicability of the algorithms. Technical Quality: 3 Clarity: 3 Questions for Authors: I don't have any specific questions. 
Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: A discussion of societal impact is not provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! **To me, the downside is that the running time of the proposed algorithm is high, especially since the algorithm is required to solve LP. This limits the application of algorithms.** We acknowledge that some of our proposed algorithms have high runtimes. However, we would like to highlight the following points: 1. Optimizing fair welfare objectives under uncertainty is an NP-hard problem [3, 4]. Thus, obtaining exact solutions in polynomial time is not feasible. 2. Previous work in the literature on fair allocations and divisions has proposed methods such as MILPs [1, 2, 4] for solving fair allocation problems without considering uncertainty. Therefore, the runtimes of our solutions are comparable to existing methods. 3. Our iterated quadratic programming approach for robust measures with ellipsoidal uncertainty sets is significantly more efficient than the naive subgradient ascent method previously proposed, as shown in Figure 1. Finally, we recognize that our CVaR approach is limited by its sample complexity. For large instances of fair allocation problems, we recommend using the normal form of CVaR, which can be optimized using SOCP or Projected Gradient Ascent techniques. [1] Kawase, Y., Nishimura, K., Sumita, H. (2023). Fair Allocation with Binary Valuations for Mixed Divisible and Indivisible Goods. arXiv preprint. [2] Caragiannis, I., Kurokawa, D., Moulin, H., Procaccia, A., Shah, N., Wang, J. (2019). The Unreasonable Fairness of Maximum Nash Welfare. ACM Transactions on Economics and Computation, 7. [3] Cousins, C., Payan, J., & Zick, Y. (2023). Into the unknown: Assigning reviewers to papers with uncertain affinities. International Symposium on Algorithmic Game Theory, 16. [4] Peters, D., Procaccia, A.D., & Zhu, D. (2022). Robust Rent Division. Advances in Neural Information Processing Systems, 35. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for replying to my concerns. 
I still think that the running time restricts the applicability of the algorithms. But I believe that the studied problem is very interesting. Considering that I am not able to judge the technical novelty of this paper, I will keep my score and confidence unchanged. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you again for your time and effort in reviewing our paper, and for both the positive comments and critiques.
Summary: This paper looks at the problem of computing resource-agent matchings under uncertainty in a group setting, where the actual valuations of agent-resource matchings are unknown. When the distribution of the valuation uncertainty is known, they look at stochastic optimization using Conditional Value at Risk (CVaR), and when a set of candidate valuations is known, they look at robust optimization through max-min optimization for worst-case valuation. They present methods to optimize for two objectives: utilitarian, where each individual is equally weighted, and egalitarian, where they look at only the group with the minimum utility. They consider uncertainty sets which are either linear or elliptical, and provide derivations of the optimization algorithm for each combination, reducing the solution to an LP in many cases, and with quadratic programming or sub-gradient ascent in others. They also show some empirical results on a reviewer assignment dataset. Strengths: The paper formalizes a set of methods to optimize the important problem of fairness and group-aware resource allocation under uncertainty. The paper is well-written, and sufficient description is given for readers to comfortably follow the theory developed. The solutions to the optimization problems are reduced to computationally tractable instances that are popularly solvable, and the analysis for both linear and quadratic uncertainty sets is a welcome addition. The empirical results also provide context on how well certain solution approaches scale. The approach seems to be novel, and encompasses a range of general use cases. Weaknesses: 1. One of the limitations I found in this paper was that the authors did not compare the uncertainty-aware solutions to other prior approaches at solving these problems: what benefit does this approach give us? There is no discussion regarding this. From the results, it seems the base methods (USW and GESW) are better than the Rob and CVaR variants. 2. 
The results do not show any indication of where the GESW optimization may be beneficial. Instead, it appears that the USW optimization supersedes the GESW optimization. If that is the case, the importance of the 'fairness' part of the paper is greatly diminished. The authors should preferably select an evaluation dataset where the approach can show its benefits, if any, to justify its inclusion. 3. As the authors state, the CVaR optimization is much slower. It also seems to require more information than the robust variant of the problem. Without a sufficient justification for its use, what is the merit of including it? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the benefit of using the CVaR and Robust variations of the optimization when the base optimization performs better in most cases? Except adversarial welfare for the robust optimization, there seems to be no meaningful benefit of considering uncertainty. 1.1. Unless I am mistaken, the baseline USW and GESW methods are supposed to be improved by considering uncertainty. If this is not the case, this needs to be mentioned in the paper and clarified. 2. Why would we need to select the GESW objective when USW performs better in almost all cases? What is the significance of the fairness aspect of the paper in this case? 3. Where can this be applied? Can you give some examples of allocation problems that are admissible based on the assumptions and limitations of the approaches (e.g. linear/quadratic valuations), and cases where it would not hold? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper addresses some limitations in the text. The authors could expand on the situations in which the proposed methods offer an advantage. The authors should also mention the possible negative side effects of using methods that do not lead to intuitive solutions like using CVaR (in Example 2.1, for example). 
This may lead to affected users perceiving unfairness or inefficiency when the metric is not easy to understand. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and suggestions! Please see our replies below. **Compare the uncertainty-aware solutions to other prior approaches at solving these problems?** We first emphasize that while we present novel solutions, our primary contribution is suggesting new *objectives* for the constrained allocation problem, i.e., optimizing utilitarian and egalitarian welfares under uncertainty. To our knowledge, the only existing prior work that solves one of these objectives for the constrained allocation problem is [1], which optimizes utilitarian welfare under ellipsoidal uncertainty sets using a projected subgradient ascent method. We compare against this method in the right half of Figure 1. Any algorithm that optimizes the uncertainty-unaware objectives is also an appropriate baseline. Those objectives can be optimized exactly, and we did compare against those uncertainty-unaware optimal baselines (USW and GESW in lines 1 and 2 in each table, and the dashed lines in the left half of Figure 1). We note that [5] focuses on optimizing expected envy or the probability of envy under uncertainty in valuations, which is a fundamentally different objective and thus not comparable to our methods. This distinction also applies to the other prior works mentioned in the related works section. We are happy to compare to other relevant baselines if we are made aware of them. Our global response on "Technical challenges in combining fairness and uncertainty" also applies to this question. **As the authors state, the CVaR optimization is much slower. It also seems to require more information than the robust variant of the problem. What is the merit of including it?** The Conditional Value at Risk (CVaR) of a random variable $ X$ at the $\alpha$ percentile is the expected value of $ X $ below its $\alpha$ quantile. 
By optimizing allocations to maximize the CVaR of welfare, we ensure that the welfare value will be at least $ w$ with $ 1-\alpha $ probability, where $w$ is the CVaR of welfare corresponding to the CVaR-optimal allocation. Put simply, instead of picking an allocation that ensures a welfare of $\ge w$ for *every* allocation (as does the max-min approach), CVaR allows a more refined analysis, allowing stakeholders to ensure a welfare of at least $w$ for at least (say) $90\%$ of the possible valuations. Thus, the CVaR measure is a less conservative objective than the Robust measure and is widely used in Operations Research and Machine Learning literature [2,3,4] to handle fat-tailed loss distributions. Finally, we emphasize that we do not advocate for any specific uncertainty-aware objectives; we simply demonstrate how to achieve fair allocations under uncertainty using popular uncertainty-aware objectives. **What is the benefit of using the CVaR and Robust variations of the optimization when the base optimization performs better in most cases?** The uncertainty-unaware USW and GESW optimization performs very poorly on the robust objectives (the last 2 columns of the tables). In Table 1, the CVaR of USW/GESW is optimal under the uncertainty-unaware baselines, but this is likely due to the fact that there is not very much variance in the distributions we estimate. This is evidenced by Figure 1, where performance of the uncertainty-unaware solutions drops as noise increases. However, we do expect the diagonal entries of all tables to be at, or very close to, 1.0 – i.e., when you (approximately) optimize for some objective, you perform near-optimally on that same objective. As shown on the left of Figure 1, the value of the uncertainty-unaware USW and GESW solutions on the CVaR objective becomes worse as the level of uncertainty increases. 
In our setting, the CVaR measure is particularly useful when the distribution of welfare is long-tailed, where optimizing for the worst case results in an overly conservative allocation (low welfare). We will use our extra page to add this clarifying discussion in the paper. **Why would we need to select the GESW objective when USW performs better in almost all cases? What is the significance of the fairness aspect of the paper in this case?** Please see the section of the global response on "Different ways of grouping reviewers". **Can you give some examples of allocation problems that are admissible based on the assumptions and limitations (e.g. linear/quadratic valuations) of the approaches?** We refer the reviewer to Appendices B, D, and E for detailed discussions on allocation problems where our methods are applicable. Please see the global response and our response to Weakness 3 (W3) of reviewer hqEo for discussion on assumptions made in the paper. We also note that we have also included a discussion on the runtimes of our proposed algorithms in the global response. [1] Cousins, C., Payan, J., & Zick, Y. (2023). Into the unknown: Assigning reviewers to papers with uncertain affinities. International Symposium on Algorithmic Game Theory, 16. [2] Soma, T., Yoshida, Y. (2020) Statistical Learning with Conditional Value at Risk. Arxiv. [3] Rockafellar, R.T., and Uryasev, S. (2000). Optimization of Conditional Value-at-Risk. Journal of Risk, 2. [4] Stoyanov, S., Rachev, S., Racheva-Iotova, B., Fabozzi, F. (2011) Fat-tailed models for risk estimation. Journal of Portfolio Management, 37. [5] Dominik Peters, Ariel D. Procaccia, David Zhu (2022). Robust Rent Division. Advances in Neural Information Processing Systems. --- Rebuttal Comment 1.1: Comment: Thanks for answering my questions and concerns. The new experiment included in the global response helps show the value of the GESW function. This should be included in the main paper. 
I am still not convinced about the need to include CVaR optimization. I agree that it allows better analysis, and it may be widely used, but what is the contribution it makes in the context of this paper? I understand that the paper's contribution is not in designing a better solution to an existing problem, but to define a new optimization. Yet, the benefits of using this optimization could be made clearer. From Figure 1, yes, the uncertainty-unaware solutions do become worse as the level of uncertainty increases, but so do the uncertainty-aware solutions (especially looking at USW, the difference is negligible). Can the authors comment on why this is, and perhaps give an example where considering uncertainty helps over the uncertainty-unaware approaches (specifically for CVaR)? --- Rebuttal 2: Title: Use-cases of CVaR and the negligible difference between CVaR and USW in Figure 1 Comment: Thank you for asking these important questions. The utility of CVaR is higher when welfare distributions exhibit a left fat-tail, meaning a greater probability mass is concentrated in the left tail. Unlike the robust (minimax) approach, the CVaR method—particularly at higher $\alpha$ values—balances between extreme pessimism and optimizing for the average performance. This is advantageous in allocation problems with high uncertainty, where worst-case optimization can lead to overly pessimistic and inefficient outcomes. For example, in public housing assignments, where families have uncertain preferences for various housing units, worst-case optimization might result in inefficient allocations, leaving many families dissatisfied. By focusing on the worst $\alpha$-percentile preferences through CVaR, allocations can be made more efficiently (while also being robust to uncertainty with high probability), thereby improving overall satisfaction. However, it's crucial to differentiate between when to apply CVaR and when the robust approach is more appropriate. 
The robust approach is better suited to scenarios with extremely high stakes, where any failure (no matter how small the probability of occurrence) is unacceptable—such as life-or-death situations (e.g., allocating medical supplies after a disaster). It is also effective in low-uncertainty contexts where optimizing for the worst case is reasonable and doesn't significantly reduce efficiency. There are several reasons why CVaR USW behaves very similarly to the uncertainty-unaware USW maximal in Figure 1. 1) When the valuations are sampled from independent Gaussian distributions, the USW is just the mean of independent Gaussian variables. The variance of the utilitarian welfare is $\\frac{\\sum_a\\sum_i\\sigma_{a,i}^2}{(nm)^2}$ where $\sigma_{a,i}$ is the standard deviation of the valuation of item $i$ according to agent $a$. Due to this the variance of the utilitarian measure is fairly low, and USW is a more stable measure compared to GESW. 2) We were sampling valuations from symmetrical Gaussian distributions and so the noise in the valuations was (mostly) getting averaged out. 3) We also had a large number of items with very small variance. In AAMAS 2015 and 2016, around 8-9% of the entries have variance less than 0.005. We verified that when we model valuations using a negatively-skewed Gaussian distribution with the same means and variances [1], we see increasing robustness of CVaR relative to uncertainty-unaware USW. The difference is sharper as the skew parameter gets more negative. For example, on the AAMAS 2015 dataset we tried the following experiment where we sample valuations from a skewed-Gaussian distribution with varying skew parameter. We optimize and evaluate for CVaR$_{0.3}$. 
| Skew | CVaR USW | USW  |
|:----:|:--------:|:----:|
| -0.5 |   1.64   | 1.56 |
| -1   |   1.45   | 1.21 |
| -2   |   1.33   | 0.96 |
| -5   |   1.29   | 0.84 |
| -10  |   1.28   | 0.82 |

[1] https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.skewnorm.html

Note that in practice, if the valuation matrix is not Gaussian-distributed, we can generate samples from its posterior distribution using Markov Chain Monte Carlo (MCMC) or Variational Inference (VI).

---

Rebuttal 3:

Title: Example where considering uncertainty (via the CVaR approach) helps over the uncertainty-unaware approaches

Comment: The following toy example further demonstrates the robustness of CVaR when the welfare distribution has a fat tail. Consider a scenario with two agents and four items, where each agent must be assigned one item. Each agent's valuation for an item is independent and follows a skewed Gaussian distribution. The mean valuations for the items are represented by the following 2D array:

$$
\begin{bmatrix}
0.39 & 0.49 & 0.51 & 0.53 \\
0.52 & 0.51 & 0.53 & 0.54
\end{bmatrix}
$$

The standard deviations for the Gaussian distributions of the four items are $[0.01, 0.04, 0.05, 0.09]$, with a skewness factor of 5 across all distributions [1]. We aim to optimize the utilitarian welfare, using CVaR$_{0.04}$ as our evaluation metric. This choice maximizes the expected utilitarian welfare over the worst $\alpha = 0.04$ quantile. To achieve this, we sampled 20,000 valuation matrices from the aforementioned distribution of valuations. We then applied three different optimization approaches: the CVaR approach at $\alpha = 0.04$, the Robust approach, and the Naïve approach that optimizes against the mean valuation.
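As a rough, self-contained sketch of the Monte Carlo evaluation described above (our own illustration, not the authors' code): we approximate the skew-normal draws with the Azzalini construction instead of `scipy.stats.skewnorm`, treat the stated means and standard deviations as location and scale parameters (scipy's parametrization differs slightly), and score one candidate allocation by the empirical CVaR of its utilitarian welfare, taken here as the sum of the two agents' utilities. All function names (`skew_normal`, `usw`, `cvar`) are hypothetical helpers of ours.

```python
import math
import random

random.seed(0)

# Toy instance from the rebuttal: 2 agents, 4 items, one item per agent.
MEANS = [[0.39, 0.49, 0.51, 0.53],
         [0.52, 0.51, 0.53, 0.54]]
STDS = [0.01, 0.04, 0.05, 0.09]  # per-item standard deviations
SKEW = 5.0                       # skewness factor

def skew_normal(loc, scale, a):
    """Azzalini construction of a skew-normal draw (stdlib stand-in for
    scipy.stats.skewnorm): delta*|z0| + sqrt(1-delta^2)*z1 has shape a."""
    delta = a / math.sqrt(1.0 + a * a)
    z0, z1 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    return loc + scale * (delta * abs(z0) + math.sqrt(1.0 - delta * delta) * z1)

def sample_valuations():
    return [[skew_normal(MEANS[a][i], STDS[i], SKEW) for i in range(4)]
            for a in range(2)]

def usw(valuations, assignment):
    """Utilitarian welfare of an assignment (agent -> item), taken as the
    sum of the two agents' utilities."""
    return sum(valuations[a][assignment[a]] for a in range(2))

def cvar(samples, alpha):
    """Empirical CVaR_alpha: mean of the worst alpha-fraction of samples."""
    k = max(1, int(alpha * len(samples)))
    return sum(sorted(samples)[:k]) / k

# Allocation reported for the CVaR approach: agent 1 gets item 3,
# agent 2 gets item 1 (0-indexed below).
cvar_allocation = {0: 2, 1: 0}
welfares = [usw(sample_valuations(), cvar_allocation) for _ in range(20000)]
print(round(cvar(welfares, 0.04), 3))
```

Because the parametrization here differs from scipy's, the printed value will not exactly reproduce the rebuttal's numbers, but the comparison between allocations follows the same recipe.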
The following results were observed on a test set of 20,000 valuations sampled from the same distribution:

**CVaR approach:**

*Allocation:*
$$
\begin{bmatrix}
0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0
\end{bmatrix}
$$
*Test CVaR$_{0.04}$ utilitarian welfare:* 0.985

**Robust approach:**

*Allocation:*
$$
\begin{bmatrix}
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0
\end{bmatrix}
$$
*Test CVaR$_{0.04}$ utilitarian welfare:* 0.972

**Naïve approach:**

*Allocation:*
$$
\begin{bmatrix}
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{bmatrix}
$$
*Test CVaR$_{0.04}$ utilitarian welfare:* 0.947

**Observations**

The naïve approach selects items 3 and 4, as they have the highest mean values, but it does not account for the uncertainty in the preferences. The robust approach, being more conservative, chooses items 1 and 2 due to their lower uncertainty. The CVaR approach strikes a balance between these two methods, selecting items 1 and 3. Item 1 has a lower mean value with low uncertainty, while item 3 has a higher mean value but with slightly higher uncertainty than item 2.

We then repeat the experiment, this time optimizing for egalitarian welfare across agents, and evaluate the results by measuring the CVaR$_{0.04}$ of the egalitarian welfare.

**CVaR approach:**

*Allocation:*
$$
\begin{bmatrix}
0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0
\end{bmatrix}
$$
*Test CVaR$_{0.04}$ egalitarian welfare:* 0.47

**Robust approach:**

*Allocation:*
$$
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix}
$$
*Test CVaR$_{0.04}$ egalitarian welfare:* 0.38

**Naïve approach:**

*Allocation:*
$$
\begin{bmatrix}
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{bmatrix}
$$
*Test CVaR$_{0.04}$ egalitarian welfare:* 0.45

**Observations**

We notice a significant decline in the performance of the robust approach. This decline occurs because, although items 1 and 2 have lower uncertainty compared to items 3 and 4, item 1 has significantly lower uncertainty than item 2, resulting in better worst-case utility for item 1.
Since the CVaR approach selected items 3 and 1, it achieves a higher CVaR of egalitarian welfare at $\alpha = 0.04$ than the robust approach, which selects items 1 and 2. We also see fairly similar results when setting $\alpha=0.2$ with the other problem parameters remaining the same.

[1] https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.skewnorm.html

---

Rebuttal Comment 3.1:

Comment: Thank you for responding to my follow-up. The example is useful in helping me understand when uncertainty-aware solutions are better. I would recommend the authors add an experiment that demonstrates it, to make a stronger case for their paper. Considering our discussion during this period, I will increase my score by one point.

---

Reply to Comment 3.1.1:

Title: Thank you!

Comment: Thank you so much for promptly checking our response and raising the score! We sincerely appreciate your feedback and constructive criticism on our paper. We will incorporate the suggested changes in the camera-ready version.
Summary: The authors study a resource allocation problem where the objective is to optimize for efficiency and fairness under the presence of some uncertainty. The agents are partitioned into groups and the items need to be assigned so as to be fair to the groups. They study two maximization objectives: 1) a weighted sum of the social welfare of groups and 2) the social welfare of the group with the least welfare. They study two models of uncertainty over the agents' valuations of the items - the first is when there is some explicit uncertainty set and the objective is to maximize the worst-case outcome across this uncertainty set, and the second is when the distribution over the valuations is known and the objective is to maximize the Conditional Value at Risk (CVaR$_\alpha$). Their primary contribution is to present formulations of these problems as various mathematical programs. They make assumptions on the uncertainty sets so as to make these formulations suitable for optimization in certain cases. They present experiments supporting their models and formulations. Strengths: 1) Their experiments suggest that optimizing for CVaR has some benefit under a large uncertainty regime, and point to the benefit of considering the robust uncertainty model. 2) Their formulation and subsequent solution appear to be faster, and more successful, than applying the naive solver. Weaknesses: The paper is a bit terse, and hard to follow at times. It would be beneficial to add more details where necessary. For example, in Line 180, please mention the randomized rounding procedure or at least the properties of the output rounded solution. Ideally, there would at least be some more details to get to the propositions. Technical Quality: 3 Clarity: 2 Questions for Authors: 1) Are there any tractable settings where the agents can be in multiple groups at once? There are many resource allocation, clustering, etc. papers where such settings are considered.
2) Can't the robust setting be viewed as a subcase of the CVaR setting where you maximize the worst (i.e., the 0th percentile) outcome? Confidence: 1 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for your comments and suggestions!

**Please mention the randomized rounding procedure or at least the properties of the output rounded solution.**

Thank you for the suggestion. We will add more details on the rounding procedure in the camera-ready version of the paper. Please see the final answer in our response to Reviewer UQiJ as well, where we discuss the rounding procedure and its relation to our experiments in a bit more detail. In general, we can use the extra page in the camera-ready to expand some details that were necessarily compressed for the submission. Reviewer UQiJ also had a great suggestion to move some of the formal theorem statements to the appendix, which should free up additional space for explanations.

**Are there any tractable settings where the agents can be in multiple groups at once?**

Our LP solutions for robust allocation with linear uncertainty sets and CVaR allocation (without the normality assumption) for both utilitarian and egalitarian welfare can be trivially extended to the case where agents are in multiple groups. This is a good point in the case of robust GESW; however, the independence-of-groups assumption is not a fundamental limit, but rather a simplifying one. The subgradient ascent approach still works for the robust GESW allocation when groups aren't disjoint. Because the minimizer will concentrate the uncertainty on a single group for GESW anyway, when we compute the minimizer at each step of subgradient ascent, we can just compute the minimizer for each group individually (even though the groups have some overlap). We agree the disjoint-groups assumption can be limiting, but it is not unreasonable in practical scenarios. Conferences often group papers into disjoint tracks and/or require paper authors to submit a unique primary subject area. Although papers may have multiple secondary subject areas, the top-level grouping remains independent.
We can handle grouping by secondary subject area using subgradient ascent as mentioned above, but it is also reasonable to use the primary subject area grouping and apply the faster approach we describe in our paper. We'll include all of this discussion in the paper.

**Can the robust setting be viewed as a subcase of the CVaR setting?**

Yes, it is well known in the literature that the robust setting is the special case of CVaR where $\alpha=0$. However, using the CVaR approach proposed in Section 3 for this purpose is inefficient because it requires a large number of samples to ensure that the uncertainty set implicitly constructed by CVaR captures the worst-case valuation matrix. Instead, we can directly incorporate the learned uncertainty set as a constraint and optimize against the worst-case model in this uncertainty set by solving a max-min optimization problem. This yields a more efficient algorithm that is not limited by the sampling complexity of CVaR. We note that prior works [1, 2, 3] in Distributionally Robust Optimization have taken similar approaches when dealing with worst-case objectives.

[1] Rahimian, H., & Mehrotra, S. (2019). Distributionally Robust Optimization Review. arXiv.

[2] Lobo, E., Ghavamzadeh, M., & Petrik, M. (2020). Soft-Robust Algorithms for Batch Reinforcement Learning. arXiv.

[3] Gabrel, V., Murat, C., & Thiele, A. (2014). Recent Advances in Robust Optimization: An Overview. European Journal of Operational Research.
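The α→0 relationship and the sample-complexity concern above can be made concrete with a small self-contained sketch (our own illustration; the `cvar` helper and the stand-in welfare distribution are ours, not the paper's): the empirical CVaR of sampled welfare values decreases toward the sample minimum, i.e. the robust value over the sampled scenarios, as α shrinks, and a tail of mass α is only populated by about α·n of the n samples.

```python
import random

random.seed(1)

def cvar(samples, alpha):
    """Empirical CVaR_alpha: mean of the worst alpha-fraction of samples."""
    k = max(1, int(alpha * len(samples)))
    return sum(sorted(samples)[:k]) / k

# Stand-in welfare distribution (illustrative only).
welfare = [random.gauss(1.0, 0.2) for _ in range(10000)]

for alpha in (0.5, 0.1, 0.01, 0.001):
    # CVaR decreases toward the sample minimum as alpha -> 0, but a tail
    # of mass alpha is only populated by about alpha * n of the n samples,
    # so small alpha needs on the order of 1/alpha samples to resolve.
    print(alpha, round(cvar(welfare, alpha), 3))
print("sample minimum (robust value):", round(min(welfare), 3))
```

This is why the rebuttal recommends replacing the sampled CVaR with a direct max-min formulation over the learned uncertainty set when the worst case itself is the objective.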
Rebuttal 1:

Rebuttal: Thank you to all the reviewers for your detailed and thought-provoking reviews. We have responded to most of your points individually, but a few points were worth addressing globally.

**Algorithm runtime/scaling**

We acknowledge that some of our proposed algorithms have high runtimes. However, we would like to highlight the following points:

1. Optimizing fair welfare objectives under uncertainty is an NP-hard problem [3, 4]. Thus, obtaining exact solutions in polynomial time is not feasible.
2. Previous work in the literature on fair division has proposed methods such as MILPs [2, 4, 5] for solving fair allocation problems without considering uncertainty. Therefore, the runtimes of our solutions are comparable to existing methods.
3. Our iterated quadratic programming approach for robust measures with ellipsoidal uncertainty sets is significantly more efficient than the naive subgradient ascent method previously proposed, as shown in Figure 1.

Finally, we recognize that our CVaR approach is limited by its sample complexity. For large instances of fair allocation problems, we recommend using the normal form of CVaR, which can be optimized using SOCP or Projected Gradient Ascent techniques.

**Technical challenges in combining fairness and uncertainty**

Satisfying fairness notions under deterministic valuations and optimizing under uncertainty are both rich problems in their own right; combining the two makes the problem that much more difficult. Optimizing the egalitarian welfare objective without uncertainty is already an NP-hard ILP [1]. Robust optimization of a linear objective with integer decision variables is already NP-hard in general as well [3]. When we combine the two, we have a constrained integer max-min problem that is difficult to solve for general uncertainty sets.
We therefore must determine how the approaches common in the robust and stochastic optimization literatures adapt to the new objective, and if any simplified forms admit more efficient solutions. The main contribution of this paper is to identify instances of that problem that can be solved exactly or approximated efficiently. We demonstrate that for ellipsoidal and linear uncertainty sets, as proposed in distributionally robust literature, our problem can be simplified to more manageable forms. These can be efficiently solved using linear programming, iterated quadratic programming, or projected gradient ascent methods, as shown in Table 1. **Different ways of grouping reviewers** In the reviewer assignment problem, papers are the agents, and reviewers are the items. Therefore, we group papers rather than reviewers. Many conferences categorize papers based on their field or subfield of research. Ideally, each group of papers should receive a sufficient number of qualified reviewers, which we ensure via our proposed algorithms. We have implemented a simulated example where there are 2 groups and 1 group is disadvantaged compared to the other. We took the AAMAS 2015 dataset, set the original papers to be group 1, and created a second group of papers by randomly selecting $k$ of the papers. For these $k$ papers, we divided the copied valuations by a number $d>1$, and set to $0$ all but the top $b$ valuations per paper. The defaults were $k=150, b=5$, and $d=2$; we tried varying each of these keeping the other two fixed. For each setting we compute the % of relative loss in GESW incurred by the max USW solution, or $\frac{f-g}{f}$, where $f$ is the GESW of the max GESW solution and $g$ is the GESW of the max USW solution. We show in our rebuttal PDF that under this setting, optimizing for USW instead of GESW results in sharp decreases in GESW, and the difference gets sharper as $k$, $b$, or $d$ increase. 
**Assumption: Groups have independent uncertainty sets** This assumption is not a fundamental limit, but rather a simplifying assumption for some cases. Our LP solutions for robust allocation with linear uncertainty sets and CVaR allocation (without normality assumption) for both utilitarian and egalitarian welfare can be trivially extended to the case where agents are in multiple groups. The subgradient ascent approach still works for the robust GESW allocation when groups are not independent. Because the minimizer will concentrate the uncertainty on a single group for GESW anyway, when we compute the minimizer at each step of subgradient ascent, we can just compute the minimizer for each group individually. We agree the assumption can be limiting, but not unreasonable, in practical scenarios. Conferences often group papers into disjoint tracks and/or require paper authors to submit a unique primary subject area. Although they may have multiple secondary subject areas, the top-level grouping remains independent. [1] Garg, N., Kavitha, T., Kumar, A. Mehlhorn, K., & Mestre, J. (2010). Assigning papers to referees. Algorithmica, 58. [2] Caragiannis, I., Kurokawa, D., Moulin, H., Procaccia, A., Shah, N., Wang, J. (2019). The Unreasonable Fairness of Maximum Nash Welfare. ACM Transactions on Economics and Computation, 7. [3] Cousins, C., Payan, J., & Zick, Y. (2023). Into the unknown: Assigning reviewers to papers with uncertain affinities. International Symposium on Algorithmic Game Theory, 16. [4] Peters, D., Procaccia, A.D., & Zhu, D. (2022). Robust Rent Division. Advances in Neural Information Processing Systems, 36. [5] Kawase, Y., Nishimura, K., Sumita, H. (2023). Fair Allocation with Binary Valuations for Mixed Divisible and Indivisible Goods. Arxiv. Pdf: /pdf/57f68a7b14d7cb89c93c9f7c8dc45b78de2a60ad.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The authors of this paper investigate the fair multi-matching problem under uncertainty. Both stochastic and robust optimizations are considered to solve the proposed problem. Strengths: S1. Fairness is an important and practical concern in resource allocation problems. S2. The theoretical results of this paper seem to be correct. Weaknesses: W1. The authors fail to provide a clear motivating example of the proposed research problem in real applications. For the reviewer assignment application, I do not see strong reasons why we need to consider uncertainty (the reviewers have already revealed their preferences over papers clearly) and fairness (is the utility of a reviewer how well the assigned papers match them?). W2. What are the major technical challenges of considering both group fairness and uncertainty in the resource allocation problem? Since there are works discussing each factor (fairness and uncertainty), can we just adapt existing techniques to solve the proposed problem? Are there any new technical challenges caused by the combination of fairness and uncertainty? W3. Does the process of obtaining the uncertainty set assume a true model (usually linear models in statistics) of the uncertain parameters? If so, I am afraid the way of considering uncertainty is still a toy research problem because in reality the data distribution is usually unknown and complicated. The uncertainty set itself may not be constructed in a reliable way. For example, under what conditions can the authors prove that their uncertainty sets described in Appendix D contain the true parameters? To prove this, I guess the authors may need strong assumptions about the data distribution. W4. As the reviewer assignment datasets do not have groups of reviewers, the authors may want to try different ways of grouping reviewers in experiments to make the results more convincing w.r.t. group fairness. W5.
It seems to me that the experiments are just numerical simulations where the objectives are simulated. This makes the research problem a toy one as no real datasets are used to verify the proposed method. Technical Quality: 3 Clarity: 2 Questions for Authors: W1, W2, W3 Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: W1, W3, W4, W5 Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for your detailed review. We respond to your comments and questions below.

## W1

As mentioned on line 306, we adopt the model used by several major conferences: ICML 2022, AAAI 2022-2024, and IJCAI 2022-2024 [1]. In this model, papers are the agents, reviewers are the items, and the value $V_{a,i}$ of assigning reviewer $i$ to paper $a$ is estimated from multiple sources to predict the overall value to the conference of eliciting that review. As such, the reviewer assignment problem involves several sources of uncertainty. Reviewers offer extremely sparse partial rankings of papers, with a significant proportion of submissions remaining unranked by *any* reviewer [1, 3]. Affinity score computation systems (e.g. TPMS [4] and OpenReview's expertise model [5]) utilize NLP methods with well-documented error rates [5]. [6, 7] both give strong overviews of the components that go into modern automated reviewer assignment systems, along with additional discussion of the noise inherent in the process. We can include all of this discussion in the camera-ready version.

## W2

Please see the global response.

## W3

Modeling uncertainty and constructing uncertainty sets are not contributions of this paper. Our main contribution is to show that for certain commonly used uncertainty sets (see, e.g., [2] for derivations), the problem of optimizing fair welfare objectives under uncertainty can be efficiently solved using our proposed methods. Cousins et al. [7] also provide more detail on constructing uncertainty sets in the same setting as the current work. That being said, Appendix D illustrates a simple example of how to construct an uncertainty set. The simplified bound in Appendix D assumes 1) that the validation set is drawn from the same distribution as the distribution under which we make assignments, and 2) that the cross-entropy loss is normally distributed.
The first assumption, although quite strong, is fairly standard in machine learning (see Ch. 6 and 7 of Shalev-Shwartz and Ben-David [8] or Ch. 14 of Mitzenmacher and Upfal [9]). There are also ways to relax this assumption by modeling distribution shift or bounding the total variation distance between the distributions. The second assumption is fairly standard as well, as it holds in the limit by the central limit theorem. While there are certainly more accurate ways of estimating this generalization error, these improvements would only change the real-valued RHS of the inequality between lines 574-575 or add fixed multiplier terms on the LHS (more details in [7]). In both cases the structure of the optimization problem (our main focus in this work) remains unchanged. ## W4 Please see the global response. ## W5 We do try to generate results using settings that are as realistic as possible (e.g. basing our experiments on a mix of empirical data and simulated data), however they are ultimately not ‘the real deal’. We have spent a considerable amount of time and effort unsuccessfully trying to convince conference organizers to provide us with some access to reviewer data, with appropriate anonymity and experimental standards in place. Organizers’ reluctance to support meta-analysis of reviewer assignment is (to some extent) understandable: the stakes of conference reviewing are extremely high, and the organizers bear some overhead in supporting the experiments we want to run on real data. We also suffer from a chicken-and-egg problem: in order to make the case to conference organizers that our methods make sense, they need to be published in top-tier venues. In order to get our methods published in top-tier venues, we need access to data by conference organizers. ### References [1] Leyton-Brown, K., Mausam, Nandwani, Y., Zarkoob, H., Cameron, C., Newman, N., & Raghu, D. (2024). Matching Papers and Reviewers at Large Conferences. Artificial Intelligence, 331. 
[2] Gupta, V. (2019). Near-Optimal Bayesian Ambiguity Sets for Distributionally Robust Optimization. Management Science, 65. [3] Rozenzweig, I., Meir, R., Mattei, N., & Amir, O., (2023). Mitigating Skewed Bidding for Conference Paper Assignment. International Conference on Autonomous Agents and Multiagent Systems, 22. [4] https://torontopapermatching.org/webapp/profileBrowser/about_us/ [5] https://github.com/openreview/openreview-expertise?tab=readme-ov-file#performance [6] Shah, N. (2022). Challenges, experiments, and computational solutions in peer review. Communications of the ACM, 65. [7] Cousins, C., Payan, J., & Zick, Y. (2023). Into the unknown: Assigning reviewers to papers with uncertain affinities. International Symposium on Algorithmic Game Theory, 16. [8] Shalev-Shwartz, S. and Ben-David, S. (2014). Understanding Machine Learning: From Theory to Algorithms. [9] Mitzenmacher, M. & Upfal, E. (2017) Probability and Computing, 2nd edition.
Compositional Automata Embeddings for Goal-Conditioned Reinforcement Learning
Accept (poster)
Summary: This paper tackles Goal-conditioned Reinforcement Learning (RL) when used with Temporal Logic Objectives. The benefits of directly considering Deterministic Finite Automata (DFAs) as the task definition lead to a new class of Compositional DFAs that cover a conjunction of several tasks. These cDFAs can be encoded using a Graph Neural Network that is pretrained on a novel task distribution (RAD). Experimental results show the approach generalizes to several task classes. Strengths: - Compositional DFAs are a more general form of multi-task descriptions. The proposed method is a straightforward way to encode these objectives and solve them. - The range of task classes considered is extensive and covers a variety of problem settings. Weaknesses: - The approach follows a similar structure to LTL2Action [41] but replaces the LTL Module with a different GNN (GATv2) and considers cDFAs or compositions/conjunctions of DFAs. The modifications w.r.t. LTL2Action, apart from this and the pre-training step over a new randomly sampled (RAD) task distribution, are novel yet arguably incremental. - Baselines appear lacking, which brings into question the merits of the approach. A direct comparison to LTL2Action [41] and GCRL-LTL [32] using the monolithic DFA of the cDFA in the experiment could be possible and if not, the reason should be justified. - The paper is at times hard to follow and overloaded with notations and conventions. Some improvements could be made in the description of RAD pre-training (see questions) and the task nomenclature 1.X.1.X in the experiments. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Should the statement in L35 be better quantified or validated with references? "DFAs represent all temporal tasks that become Markovian by augmenting the state-space with finite memory". Does this hold when Signal Temporal Logic (STL) is considered a temporal task language as well? 2.
Is Reference [7] a concurrent submission or a typo? 3. In Figure 5, what are the black elements in the Legend? Why are they not present in the figures? 4. What do the envelope colors in Figure 2 indicate? Does the encoding generalize to conjunctions of the predicates on edges (e.g., Red & Green)? Are these considered during pretraining? 5. Defn. 4.1. could use some clarity. How is a random transition defined in a mutation? Is this simply changing the target node of a given transition to a random state in the DFA? What is being “minimized" here? 6. Minor grammatical error - Should DFA and DFA(s) be used interchangeably? I would think the following references to DFA should be changed to DFAs (L31, L33, L80, L85-87, and more) Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The limits of cDFA encoding are not fully explored (mentioned). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: _Thank you for the time and effort put into your review._

### Comparison to LTL2Action

We provided an in-depth empirical comparison with LTL2Action, along with a detailed discussion, in Appendix C.8. We plan to use the extra space afforded by the camera-ready to move it into the final version of the paper.

### Comparison to GCRL-LTL

In the related work section of the paper (lines 94-100), we provided a discussion of GCRL-LTL which justifies why we did not compare against that method. Please see the **Baselines** section in the meta-rebuttal for a more detailed discussion. We will clarify this in the final version of the paper.

### Clarity, presentation, and notation

Thank you for the feedback. We will make the necessary fixes pointed out by the reviewer and further simplify the presentation in the final version.

### DFAs represent all temporal tasks that become Markovian by augmenting the state-space with finite memory

Thank you for bringing this up. We will clarify what this means in the final version, as it is indeed confusing / ill-posed. Here, a task is taken to be a Boolean property of an episode: you either perform it or you don't. By making a task Markovian we mean that the state of satisfaction can be determined by just looking at the current state, e.g., did you reach a goal state? Importantly, we are assuming that each state is labeled from a finite set. Mechanically, one can then identify the DFA task from the transitions of the finite states, since the properties are only defined over the labels of the finite states.

### Signal Temporal Logic (STL), finite memory, and DFAs

The short answer with STL is no. STL is not generally representable with finite memory. A longer answer is that it depends on the specific STL formula and the semantics applied to (i) time, (ii) the signals, and (iii) how finite truncations of the signals are handled.
But, per the earlier point, if the STL formula has a finite-memory monitor, then yes, it can be encoded as a DFA. In such cases, the DFA often corresponds to syntactic progression as seen in LTL2Action.

### Is Reference [7] a concurrent submission or a typo?

Reference [7] is neither a concurrent submission nor a typo. It is a non-archival workshop paper that we have written. It successfully solves the reward sparsity problem with DFA rewards in off-policy RL algorithms using hindsight experience replay.

### In Figure 5, what are the black elements in the legend? Why are they not present in the figures?

Zooming into that figure will reveal that each point has a distinct shape representing the kind of operation applied to the sampled cDFAs. These shapes correspond to the black elements in the legend. We will make this clearer in the final version.

### What do the envelope colors in Figure 2 indicate?

Those colors are just there to help track where the messages came from. We will make this clearer in the final version.

### Does the encoding generalize to conjunctions of the predicates on edges (e.g., Red & Green)? Are these considered during pretraining?

Our work does not treat symbolic relations on the edges; however, we think it is an interesting direction. Conjunctions may potentially be handled with a many-hot encoding, but this is purely speculation.

### Defn. 4.1: DFA Mutation

Consider the corresponding adjacency matrix of a DFA, which is a 0-1 matrix indicating the transitions between states. We randomly sample an entry in this matrix and toggle it (if the entry is 0, make it 1; if it is 1, make it 0). This is precisely what a mutation is. We will update the definition in the final version and make it clear.

### What is being "minimized" here?

The DFA is being minimized. Specifically, the number of states is being minimized. Conceptually, this corresponds to merging states that are interchangeable (have equivalent residual languages).
This results in a canonical representation that is irreducible. See https://en.wikipedia.org/wiki/DFA_minimization for details.

### Should DFA and DFA(s) be used interchangeably?

Thank you for pointing out this inconsistency. They are indeed not interchangeable, and we will correct these abbreviations in the final version.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for the clarifications. The main contributions are clear (RAD pretraining/embeddings) and I understand how DFAs can encode many tasks captured by different temporal logics. I will raise my score accordingly, since I believe the method presents a useful technique to generalize over logical tasks. Rather than a qualitative argument as to why other related approaches (GCRL-LTL) are different, I would have been interested to see simulations on a sample task/specification. Further, I believe the paper could use some presentation refinements (as suggested) and look forward to seeing the final version.

---

Reply to Comment 1.1.1:

Comment: _We thank the reviewer for their response and for raising their score._ We will make sure to include all of the suggested presentation refinements in the final version and believe they will significantly help the clarity of the paper. Space and time permitting, we plan on running empirical comparisons against the related approaches (GCRL-LTL) on the types of problems described in the meta-rebuttal to back up the qualitative argument.
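The mutation operation described in the rebuttal above (toggle one randomly chosen entry of the DFA's 0-1 adjacency matrix) can be sketched in a few lines. This is our own illustration, not the paper's code; the function name `mutate` and the example matrix are hypothetical, and a full implementation would operate on labeled transitions and re-minimize the mutated DFA afterwards, as the rebuttal notes.

```python
import random

def mutate(adj, rng):
    """One DFA mutation as described in the rebuttal: sample a uniformly
    random entry of the 0-1 adjacency matrix and toggle it."""
    n = len(adj)
    i, j = rng.randrange(n), rng.randrange(n)
    mutated = [row[:] for row in adj]  # leave the original matrix intact
    mutated[i][j] = 1 - mutated[i][j]
    return mutated

# A small 3-state example.
adj = [[0, 1, 0],
       [0, 0, 1],
       [1, 0, 0]]
mutated = mutate(adj, random.Random(0))

# Exactly one entry differs between the original and the mutant.
diff = sum(adj[i][j] != mutated[i][j] for i in range(3) for j in range(3))
print(diff)  # 1
```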
Summary: This paper extends the framework of goal-conditioned reinforcement learning to support temporally-extended goals specified as (compositions of) deterministic finite automata (cDFAs), avoiding the limitations of state-based goals and the ambiguities of natural language specifications. The authors introduce a graph neural network architecture for embedding cDFAs to vectors via message passing. They then develop a pretraining distribution for learning cDFA embeddings, which is constructed by leveraging the insight that paths in a DFA correspond to a series of reach-avoid tasks, motivating a distribution of cDFAs that are derived from sequential reach-avoid DFAs. In experiments, the authors show that pretraining on this distribution produces generalizable embeddings that capture the structure of DFA space, while also enabling the training of generalizable cDFA-conditioned policies. The authors run these experiments on two labeled MDPs, Letterworld and Zones, showing that embeddings or policies trained on only RAD DFAs or other DFA subsets often achieve high goal satisfaction rates even when presented with cDFAs with longer task lengths and higher composition numbers. cDFA-conditioned policies using a frozen cDFA embedder trained on the RAD distribution also lead to high satisfaction rates across the board in Letterworld, with reasonably strong performance in Zones (a harder continuous MDP). Strengths: This is a very well-written and thorough paper that made for a pleasant and insightful read. Despite not working much with DFAs or GNNs, I found both the overall idea and the low-level details very easy to understand, because all key concepts are carefully but concisely explained, with figures that clearly illustrate what a cDFA is, how it can be fed into a GNN, and what the structure of SRAs and RADs looks like.
The design of the RAD pretraining distribution was also very insightful, demonstrating careful thinking about what motifs ought to be repeated across a large class of DFAs (namely, reach-avoid tasks), and how that could be exploited to enable generalization. Experiments were very clearly laid out and thoroughly executed, covering a wide range of DFA classes to test generalizability, and demonstrating that the proposed method actually does lead to cDFA-conditioned policies that generalize quite well across the board. In terms of motivation and impact, I think it's great that people are exploring more expressive and well-defined notions of goals in the context of goal-conditioned RL. While working with formal specifications is not very popular in machine learning these days, I think a strong paper like this one helps to elevate the profile of using such specifications to enable more interpretable and generalizable RL systems. As such, I think this paper could have a decently large impact within the fields of both goal-conditioned RL and people working on formal methods in RL / ML. Weaknesses: As the authors themselves note, probably the main weakness / limitation of work that relies on symbolic specifications like cDFAs or LTL formulae is that it requires MDPs with a labeling function. This limitation isn't insurmountable --- e.g., it should be possible to use computer vision algorithms to segment and label states of the world from pixels --- but in the final version of the paper, I think it'd be good to discuss this a bit more, since it might help readers appreciate how work like this could be applicable without an oracle labeling function. Another slight weakness of the paper is that most of the experiments focus on Letterworld, with weaker average performance in the more realistic environment of Zones. This raises questions about how applicable the method will be to more realistic tasks.
With the extra page in the final version, I think it'd be good to show images of both environments, and examples of harder cDFAs in those environments, just to illustrate how difficult it might be to solve those tasks. Otherwise it might be easy to write off e.g. Letterworld as a toy domain even though with cDFAs the tasks can be quite long-horizon. It might also be good to say a bit about how performance could be improved for harder tasks like Zones. Finally, I think one thing that bears discussing in the final version is how this framework might enable safety/correctness of the policy, not just interpretability of the specification. While it's no doubt a safety benefit to have more interpretable specifications that say exactly what correct behavior is, this on its own doesn't guarantee the resulting learned policies are safe. I wonder if the authors could say a bit more on this point, perhaps by connecting to work on formal verification of NNs, or by combining cDFA-conditioned RL with constrained planning methods, using the learned policy to accelerate provably safe planning. Technical Quality: 4 Clarity: 4 Questions for Authors: There were a couple of minor things I didn't fully grasp, which it'd be great to clarify: - When constructing/sampling a SRA DFA (e.g. RHS of Figure 3) how are reach or avoid transitions created? Is the reach transition always a transition to the accepting state? Or is it to a state that eventually leads to an accepting state? Is it the case that there is a single long path to the accepting state? And is the avoid transition always to a single sink state? I couldn't fully understand this from Appendix B. - In Figure 4, in the panel for Zones, where is the (non-frozen) GATv2 pretraining line baseline? And am I right to say that the RGCN baseline was not trained for Zones? - For each MDP with its own labeling function and alphabet, do you basically need to do the RAD pretraining phase specifically for that alphabet of symbols?
Or is there hope of training a cDFA embedder that generalizes across different MDPs with different labeling functions and alphabets? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have adequately discussed a number of limitations of their work. It would be good to also discuss the fact that interpretable specifications (in the form of cDFAs) do not immediately lead to verifiably safe behavior. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: _Thank you for the time and effort put into your review._ ### Regarding the comment on the labeling function. We agree with the potential usage of computer vision algorithms to segment and label states of the world from pixels. We think there is a lot of interesting and fruitful future work that could address this limitation. We will use the additional space to comment on this issue and hint at possible directions for future work in the final version of the paper. ### Weaker performance in Zones. The performance difference between the Letterworld and the Zones environments, as the reviewer pointed out, is due to the fundamental difficulty that comes with continuous control. That is, in continuous domains, it is harder to learn control from an uncountable domain of inputs, and it takes longer to reach a goal, resulting in lower discounted return. As suggested, we will include images from both environments to highlight these points and the fundamental difficulty of continuous domains along with a discussion on how to mitigate these difficulties. ### Safety/correctness guarantees Thank you for the question. At this point, without further research, we believe it is hard to say anything precise about the safety of the learned policies. To the best of our knowledge, there are still scalability issues with the state-of-the-art NN verification literature; however, the constrained planning methods direction pointed out by the reviewer is definitely interesting and is part of our broader research agenda. Another [promising approach is to have the policy output a correctness certificate, such as a reach-avoid martingale, to make the verification problem simpler](https://ojs.aaai.org/index.php/AAAI/article/view/26407).
### How are reach or avoid transitions created? Conceptually, the reach/avoid transitions are indirectly constructed by first starting with a chain of N states and a failure state, FAIL: > $S_1 \rightarrow S_2 \rightarrow \cdots \rightarrow S_N$ For each state $1,\ldots,N$, the tokens are partitioned into reach/avoid/noop. The reach tokens advance the state ($S_i \rightarrow S_{i+1}$). The avoid tokens transition to FAIL. RAD is derived by mutating the transitions (toggling entries in the transition matrix) and minimizing. Conceptually, this acts to reintroduce features like cycles and skip connections, which offer more than a single path, while avoiding fully connected "random" graphs. ### Question about Fig. 4. Due to time and compute constraints. Running experiments in the Zones environment requires much more compute resources and time than Letterworld; therefore, we only ran experiments for the best (frozen and pretrained GATv2) and baseline (GATv2 with no pretraining) configurations of the framework in the Zones environment. For the same resource reason, in Zones, we did not train the policies using embeddings produced by an RGCN model. We believe that the Letterworld experiments give enough evidence to conclude that embeddings of a pretrained and frozen GATv2 give the best performance in the continuous domain, but can include more experiments if it would be useful. ### Question on training a general cDFA embedder Thank you for the question. To answer this, first note that the pretraining is independent of the downstream application / MDP. Second, without environment semantics, the tokens are just labels that can be permuted, relabeled, etc. Thus, changing the labeling is as simple as deciding the index of each token. Therefore, one can pretrain a GATv2 model on RAD cDFAs with N symbols, and then use it in any environment and downstream application with up to N symbols. Of course, in order to use the embedder in a given environment, one must somehow ground/order the tokens.
In our experiments, this is implicit through the evolution of the embeddings (as the DFA progresses, we provide the corresponding embedding). In the future, one could imagine sending as side information an embedding of each token, but this is purely speculative. We will make it clear in the final version. --- Rebuttal Comment 1.1: Title: Thank you for the response. Comment: Thank you for the answers to my questions. I continue to think this is a strong paper, and will be maintaining my score. Having read the other reviews, I also think it would be valuable to make the comparison with LTL2Action more explicit and move it into the main text. --- Reply to Comment 1.1.1: Comment: _We thank the reviewer for their response and for arguing that this is a strong paper._ We will certainly move the comparison to LTL2Action into the main text in the final version of the paper to make sure that it is clear to readers.
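The sequential reach-avoid chain described in the rebuttal above (a chain of N states plus an absorbing FAIL state, with the token set partitioned into reach/avoid/noop at each state) can be sketched as follows. This is our own illustrative reconstruction, not the authors' code; the partition sizes, function names, and seeding are assumptions, and the transition-mutation-plus-minimization step that derives RAD DFAs is omitted.

```python
import random

def make_sra_dfa(n_states, tokens, seed=0):
    """Sketch of a sequential reach-avoid (SRA) DFA.

    States 0..n_states-1 form a chain; state n_states is ACCEPT and
    n_states+1 is FAIL. At each chain state the tokens are randomly
    partitioned into one reach token (advance), some avoid tokens
    (-> FAIL), and noop tokens (self-loop).
    """
    rng = random.Random(seed)
    accept, fail = n_states, n_states + 1
    delta = {}  # (state, token) -> state
    for s in range(n_states):
        shuffled = list(tokens)
        rng.shuffle(shuffled)
        reach = shuffled[0]                              # one reach token per state
        avoid = set(shuffled[1:1 + len(tokens) // 3])    # arbitrary partition size
        for tok in tokens:
            if tok == reach:
                delta[(s, tok)] = s + 1                  # advance along the chain
            elif tok in avoid:
                delta[(s, tok)] = fail                   # violation -> absorbing sink
            else:
                delta[(s, tok)] = s                      # noop: stay put
    # ACCEPT and FAIL are absorbing
    for tok in tokens:
        delta[(accept, tok)] = accept
        delta[(fail, tok)] = fail
    return delta, accept, fail

def run(delta, word, start=0):
    """Run the DFA on a token sequence and return the final state."""
    s = start
    for tok in word:
        s = delta[(s, tok)]
    return s
```

Feeding the reach token of each state in order walks the chain to ACCEPT, while any avoid token traps the run in FAIL, matching the single-long-path, single-sink-state structure described above.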
Summary: This work focuses on goal-conditioning policies using DFAs. This leverages the ability of DFAs to be composed - in this case the work focuses on conjunction. Two main pieces are introduced in leveraging DFAs: 1) a GATv2 model which provides task embeddings from the DFA, 2) pre-training of the GATv2 on Reach Avoid Derived compositional automata, which trains the embeddings with a basis representation in terms of the RAD embeddings which can then be used to represent other DFA embeddings. Once the DFA embedding is obtained, it is appended to the environment state space and used to train a policy. Experiments demonstrate the benefit of GATv2 over an alternative RGCN (when pre-training and not freezing the embeddings) and also demonstrate the benefit of pre-training the task embeddings, particularly when freezing the embeddings after. Strengths: # Originality The use of DFAs and similar representations of tasks is a popular idea in RL at the moment. However, this work focuses on the use of the DFA within a longer pipeline to obtain the embeddings. This use appears novel as most works I am aware of tend to use fairly literal and inflexible representations of the DFAs or state machines. This work then also puts some interesting pieces together - DFAs for task representations, GATv2 to represent a graph, RAD pretraining. Overall this all supports the novelty of this work and I do believe this is a strength. # Quality The aim and hypothesis of this work is clear. Each step of obtaining the task embedding is justified and validated with experiments. The ablation study does demonstrate the utility of both GATv2 and the pretraining from RAD tasks. Moreover the additional experiments and visualisations support the fact that GATv2 is learning to produce reusable and semantically meaningful task embeddings. # Clarity Overall the paper is well written; notation is used sparingly and is intuitive.
This is particularly useful as the combination of DFAs and MDPs can often become very notation heavy. I appreciate the effort placed in making this work clear. Images and visualizations are also neat and clear and support understanding. # Significance Enabling generalization in RL, and specifically task generalization, is an important problem. It has particular utility for enabling safe and adaptable RL agents. This work provides a new perspective and step towards this goal. I do think that this work would lead to future work and new ideas. Weaknesses: # Clarity Figure captions could elaborate more. As they stand, they are vague and do not add any explanation or support understanding of the figure. Secondly, Section 3.1 is unclear due to the omission of an explanation for the DFA modifications. As a result it is unclear why any of these steps are needed, but this seems to be an important and early part of the pipeline. Thus, it leaves the entire pipeline feeling unclear or at least on an uneasy grounding. # Significance Unfortunately, I think the significance of this work may be its biggest weakness. Firstly, it is not clear to me that the findings on conjunctions will trivially generalize to general boolean combinations, as is claimed in lines 144 and 145. Secondly, the curse of dimensionality which results from conjunctions of DFAs is never addressed or touched on. Perhaps the vector representation of task embeddings does support generalization - and the visualizations would even support this - but it is not discussed. This is even more worrying when considering that a fixed dimension for the task embeddings is used. Thirdly, the experiments do support the answering of the research questions posed. However, the experiments do not compare to any other baselines from the literature, which makes it difficult to contextualize and build from this work.
Finally, and I acknowledge that the authors point this out in the limitations section and so I will take this point lightly, but there is no insight into how the task embeddings support generalization. This does unfortunately limit the significance of the work. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How would the results of the generalized agents with task embeddings compare to specialized agents? 2. In Figure 4, why do the Zones results only have two models? 3. At a high level, how would you extend the approach to incorporate general boolean combinations? If the authors could address my concerns with regard to the curse of dimensionality, and answer these questions, I would likely advocate for acceptance. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are mentioned and I do not see any omissions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: _Thank you for the time and effort put into your review._ ### Regarding Clarity of Sec. 3.1. We appreciate this feedback on clarity so we will include an explanation for the specific graph encoding we use. Essentially, we apply four operations on a DFA to construct a graph encoding: 1. Add intermediate nodes between states to encode edge constraint information. **Explanation:** We want edge constraints to be used during message-passing. By introducing these intermediate nodes with edge constraint information, we make sure that each node receives messages including edge constraints needed to get to the next states. 2. Reverse all the edges. **Explanation:** We want each state to receive information from future states since the goal is to find an accepting path in the DFA. 3. Add self-loops. **Explanation:** We want each node to receive messages from itself to avoid forgetting the local node message. It is a common practice in the GNN literature. 4. Add edges from the nodes corresponding to the initial states of DFAs to the “AND” node. **Explanation:** The "AND" node can be understood as a conjunction operator aggregating messages from each DFA in the composition. It is connected to the initial states of each DFA because we want this operator to receive information from the current state of each DFA. ### Curse of Dimensionality. By the curse of dimensionality, if the reviewer means the exponential blow-up caused by taking the conjunctive composition of DFAs, please see the cDFA section of the meta rebuttal which clarifies how we avoid such a blow-up. ### Comparison with other baselines Please see the baselines section of the meta-rebuttal. To directly address your question, we will move the experiments comparing our work with LTL2Action (given in Appendix C.8) to the main body of the paper to help further contextualize the proposed method. The key result is that our approach does indeed generalize better given the RAD pretraining. 
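The four graph-encoding operations listed in the rebuttal above can be sketched on a transition-function view of a single DFA in a composition. This is a hypothetical reconstruction under our own naming conventions, not the authors' implementation:

```python
def dfa_to_graph(delta, initial, and_node="AND"):
    """Sketch of the four graph-encoding steps described above:
      1. insert an intermediate node per transition, carrying the edge token,
         so edge constraints take part in message passing;
      2. reverse all edges so information flows backwards from future states;
      3. add self-loops so each node keeps its local message;
      4. connect the DFA's initial state to the conjunction ("AND") node.
    Takes delta: (state, token) -> state, and returns a set of directed
    edges (u, v) over state, intermediate, and AND nodes.
    """
    edges = set()
    nodes = set()
    for (src, tok), dst in delta.items():
        mid = ("edge", src, tok, dst)   # step 1: intermediate node with token info
        nodes.update([src, dst, mid])
        edges.add((dst, mid))           # step 2: reversed direction:
        edges.add((mid, src))           #         dst -> mid -> src
    for n in list(nodes) + [and_node]:
        edges.add((n, n))               # step 3: self-loops on every node
    edges.add((initial, and_node))      # step 4: initial state feeds the AND node
    return edges
```

For a cDFA, one would call this once per conjunct with the shared `and_node`, so the AND node aggregates messages from the current state of every DFA in the composition.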
### Insights into how the task embeddings support generalization. We definitely believe that further research is crucial to understand why and how the task embeddings support generalization. However, we believe this is a research question on its own and should be investigated in future work. Please also note that our embedding space analysis given in Fig. 5 in the main body and Fig. 9 in the appendix provides an intuition for the generalization results we see in the paper by visualizing the structured nature of the learned embedding spaces. ### How would the results of the generalized agents with task embeddings compare to specialized agents? In our experiments, we did not see any significant difference between RAD-pretrained agents and agents trained for a single task class when evaluated on the same task class; therefore, we omitted these results. However, we can include such a comparison in the final version of the paper. ### In Figure 4, why do the Zones results only have two models? Due to time and compute constraints. Running experiments in the Zones environment requires much more compute resources and time than Letterworld; therefore, we only ran experiments for the best (frozen and pretrained GATv2) and baseline (GATv2 with no pretraining) configurations of the framework in the Zones environment. For the same resource reason, in Zones, we did not train the policies using embeddings produced by an RGCN model. We believe that the Letterworld experiments give enough evidence to conclude that embeddings of a pretrained and frozen GATv2 give the best performance in the continuous domain, but can include more experiments if it would be useful. ### At a high level, how would you extend the approach to incorporate general boolean combinations? Please see the meta-rebuttal section on cDFA. --- Rebuttal Comment 1.1: Title: Comment by Reviewer Ww4e Comment: I thank the authors for their thorough response. * For the additions to improve clarity of Section 3.1.
These additions are what I had in mind and will address my concern. * For the concern on the curse of dimensionality: I note that the cDFA means the exponential blowup in states does not explicitly occur. However, the vector representation of the task space must still be able to disambiguate the composed tasks from all other tasks. This is my main concern, that the fixed vector size will either need to be excessively large to begin with or will rapidly become exhausted. This is an empirical question of how quickly this vector space representation will become difficult to use due to the exponential increase in task space from composition. The cDFA point above does not seem to address this, unless I am missing something. * A comparison to LTL2Action seems useful and to my knowledge is the most appropriate baseline. This would correct the original omission in my opinion. * I agree that investigating the task embedding space would support future work. However, it is unfortunately related to my above point that it is unclear how quickly the empirical results of this work would decay without such an understanding. To me this isn't a difference between acceptance and rejection but does limit the significance of the work at the upper end. Particularly, because many benchmarks which do not use task embeddings are guaranteed to generalise arbitrarily. The task embedding having a theoretical limit in its representation is a weakness of the proposed method which is very difficult to assess - I admit this - but is still a limitation nonetheless. However, I acknowledge that this is pointed out by the authors and so I emphasise that I am not using this to decide on acceptance. * I think this concession on the Zones experiment needs to be stated explicitly. This could also be of importance in assessing the scaling capacity of the proposed method. However, once again I can accept these concerns being left to future work as it is stated in the limitations section.
* I acknowledge that I have read the reviewers' comments in the general rebuttals. Overall I think that if the changes to clarity are made as proposed then I would be comfortable seeing the work accepted. Unless I am missing something, the score is limited due to the difficulty of assessing the scalability of the model, which is a weakness unique to using task embeddings compared to approaches which do not, such as those cited by Reviewer A6oy. This needs to be stated too. However, in acknowledgement of the proposed changes and some helpful clarification I will raise my score to a 6. However, I think the discussion with the other reviewers appears useful and given their in-depth responses I would defer the outcome to be weighted towards their reviews. Thus, I will be lowering my confidence. --- Reply to Comment 1.1.1: Comment: _We thank the reviewer for their response and for raising their score._ We are glad to hear that you agree that the comparison to LTL2Action is the most appropriate baseline and that the additional clarity in Section 3.1 addresses your concerns. We agree that further investigation into the task embedding space to understand its scalability (in more complex environments such as Zones and with larger tasks) is an exciting direction of future research. We will make sure to address these limitations and future directions clearly in the final version of the paper. It is true that the fixed representation size does provide a theoretical limitation on its representation capacity as the size of the task space increases. We have two reasons to be hopeful that this will not pose a major limitation in practice: 1. This exponential blowup is a theoretical limitation for any sequence modeling task and yet, fixed-size embeddings have proven to be successful in computer vision (e.g., CLIP) and natural language (e.g., word2vec) domains. 2.
DFAs (compared to unstructured representations such as natural language) enable a natural finite approximation of the embedding space: _planning over the DFA up to a limited depth_. A fixed size embedding space might naturally converge on this approximation.
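The bounded-depth planning approximation mentioned above can be illustrated with a small breadth-first unrolling of a DFA. This is purely our own sketch of the idea, not anything implemented in the paper; the function name and interface are assumptions:

```python
from collections import deque

def bounded_unroll(delta, start, accepting, tokens, depth):
    """Enumerate token sequences of length <= depth that first reach an
    accepting state, i.e. plan over the DFA up to a limited depth.

    delta: (state, token) -> state; accepting: set of accepting states.
    Sequences stop at the first acceptance; runs that never accept within
    the depth bound are discarded.
    """
    plans = []
    queue = deque([(start, ())])
    while queue:
        state, word = queue.popleft()
        if state in accepting:
            plans.append(word)      # record the accepting prefix and stop
            continue
        if len(word) < depth:       # expand only while under the depth bound
            for tok in tokens:
                queue.append((delta[(state, tok)], word + (tok,)))
    return plans
```

The finite set of plans returned this way is one natural finite summary of a DFA's behavior; the speculation above is that a fixed-size embedding space might converge on something like it.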
Summary: This paper considers the multi-task setting where each task is a temporal logic task specified by a conjunction of deterministic finite automata (cDFA). To address the sample efficiency and generalisation problems present in this setting, they propose a method for generating good cDFA embeddings which can then be used by a universal value function approximator to learn general policies. Precisely, they propose converting a given cDFA into a graph which can then be passed to a graph neural network (GNN) like GATv2 to create a vector embedding of the task. Similarly to prior work that demonstrated the benefit of pretraining GNN embeddings on LTL tasks (LTL2Action), this paper also proposes the *reach avoid derived* DFA (RAD-DFA) task distribution to pretrain the GNN embeddings. They then demonstrate that using frozen pretrained embeddings leads to better performance than finetuning, and no pretraining leads to abysmal performance. Finally, they also demonstrate that their pretraining procedure leads to reasonable cDFA embeddings and helps with generalisation. Strengths: ## Originality - This work combines three established directions of research, namely: temporal logic task specifications (in the form of DFAs specifically), goal conditioned RL (using UVFAs specifically), and graph neural networks (GATv2 specifically). - The intersection of these three subfields is of more recent interest and only just emerging in the literature. Thus, there is an element of originality in the specific manner in which they are combined in this work with the aim of improving sample efficiency and generalisation in RL. - Also the specific manner in which a GNN is pretrained to generate good DFA embeddings (by introducing the RAD pretraining task set) appears to be fairly new. ## Quality and Clarity - The work provides mathematical formulations for the main concepts needed to understand the approach, which aid the quality of the work.
- Assumptions, while not formally stated, are mentioned too. - The explanation of the framework is also detailed and coherent, which makes the contributions of this work more apparent. - The paper includes a number of Figures which are helpful and clear and, in the case of Figure 1, aid the message of the work quite a lot. - The paper includes a number of experiments that illustrate the goal embeddings learned by the proposed approach, and the resulting sample efficiency and generalisation over specific task distributions. ## Significance - Temporal logic instruction following is an important direction of work and has wide applications---for example in multitask, lifelong learning and safe RL. - So leveraging GCRL (with UVFAs) and pretrained GNN embeddings to improve sample efficiency and optimality---important issues in general in RL---for DFA specified tasks can be particularly impactful and widely adopted. In this sense, this work could guide future work and ideas. Weaknesses: # Major ## Originality - Overall, this work is very similar to LTL2Action (Vaezipoor et al. 2021) and does not deviate particularly far from the work presented there beyond considering DFA specified tasks instead of LTL ones, which is just a choice of task specification language (regular/DFA vs LTL/Büchi automata). Also the paper claims that LTL2Action is limited to "finite" LTL (line 82). It is not clear what is meant by "finite" here. If it means finite trace, then that is incorrect. LTL2Action is applicable to any LTL specification. - In addition, relying only on a UVFA for sample efficiency and generalisation is a widely known idea that has also been explored in prior works. While the exact implementation here may be new (such as the specific way the goal/task embedding is obtained), the ideas are familiar from a function approximation front.
- The paper introduces the cDFAs, but this does not seem novel since they don't seem meaningfully different from DFAs and the distinction does not seem relevant for the proposed approach. Hence this just adds more notations and terminologies. Also, the footnote on page 2 says "With negations and disjunctions omitted as straightforward extensions". It is not clear how they are straightforward, unless cDFAs are indeed just DFAs as I mentioned previously. ## Quality and clarity ### Mathematical formulations - I found it very hard to fully understand the proposed framework and judge the soundness because there are a number of incorrect statements and missing information. - The MDP definition (Def 2.1) is wrong. An MDP is a five tuple M = (S,A,T,R,$\gamma$) where $R$ is the reward function and $\gamma$ is the discount factor (optional when $\gamma=1$). It is not a triple M = (S,A,T). The objective of goal-conditioned RL is not to maximize the probability of generating a path containing g (lines 127-128), but instead to maximise the value function (the expected cumulative rewards). - It is fine if the authors want to focus on the maximisation of the success probability, but then the paper needs to describe how that will be achieved in the RL setting defined. - The reward function and discount factor for the problem formulation are never defined. The only reward function mentioned in the mathematical formulations is the one used for pretraining the GNN. Since the experiments uses the same reward scheme for learning policies, I suggest the authors formally state that in their problem formulation. Additionally, Table 1 shows that the experiments use a discounting, meaning that the learned policies *do not* correspond to the maximisation of success probability. - The paper says that goals are defined in the augmented MDPs (line 164), but the DFA-conditioned policy uses the environment states and not the augmented ones (Def 3.1). 
- The paper uses message-passing neural networks (MPNNs) (line 173) as a core factor in their approach, but never defines or describes what these are. - The paper proposes cDFAs as a novel goal-representation for temporal tasks in goal-conditioned RL. Beyond the fact that I am not convinced that they are meaningfully different from DFAs, the paper gives no theory nor conducts any experiment to demonstrate that they "balance formal semantics, accessibility, and expressivity" as claimed in their first contribution. In general, since the authors claim it as a main contribution of this work, they should justify why it is better than other representations like RMs, LTL, GTL, TLTL, SPECTRL, etc. I think such rigorous justifications are not needed if they did not claim it as a contribution of this work. ### Experiments - It is not clear why the main paper contains no experiment comparing with prior works. The only such experiment is left to the appendix, and only uses LTL2Action as baseline, which is not state-of-the-art (there is also no sample-efficiency comparison). Without state-of-the-art baselines like [1] or [2] in the main paper, it is hard to tell how good the proposed approach is. Given that [2] drastically outperforms LTL2Action in their experiments in the same domains, this work does not look promising as it only shows similar performance to LTL2Action. - The paper claims that the proposed approach is applicable to any DFA, but the DFAs used for pretraining the goal encoder (Algorithm 1) and the ones used for the experiments (page 7) only have one atomic proposition per edge (e.g. $red$, $square$, etc). These are the simplest DFA transitions possible. In general the edge of a DFA is a Boolean expression over the atomic propositions (e.g. $red \wedge \neg square$). It is unclear if the approach generalises to more complex DFAs (where the edges/transitions can be arbitrary Boolean expressions).
- All the example DFAs used are generally not satisfiable in the environments considered since their states lack self-transitions (e.g. in Fig 1). Since all the task distributions used for the experiments are defined similarly, it is unclear why the results show high success rates on them (they should be generally unsolvable). This suggests that important details of the implementation of the approach for the experiments may be different from what has been described in the paper. - It is also unclear what the tasks sampled from the defined distributions look like. It would have helped if the authors included examples of the simplest and most complex DFAs sampled during training and during testing. - It is unclear why freezing the pretrained weights performs better than finetuning. This is counter-intuitive but is never explained in the paper. - Given how the performance of the proposed approach is heavily reliant on the goal embeddings obtained from pretraining (the performance is abysmal without it), it is concerning that the paper did not stress test it to understand when it fails and why. For example, what if the number of conjunctions goes up to 100 where each DFA has a task length that also goes up to 100 (relates to the temporal curse of dimensionality)? What if the DFA transitions are more complex, that is when they are arbitrary Boolean expressions over the atomic propositions (relates to the spatial curse of dimensionality)? ## Significance - This work does not make a very large step forward from prior work (the addition of state-aware planning), but it is one. So I do not want to be overly pessimistic on the front of significance, but it is necessary to note that the contributions of this work are relatively incremental. If I understood the paper correctly, the only main contribution is the approach for pretraining a *good* goal (DFA) embedding. Additionally Figures 20-21 also only shows similar performance to non-state-of-the-art prior work (LTL2Action). 
- The absence of theory does not help. Theory guaranteeing generalisation and convergence to optimal policies (as claimed) under some reasonable assumptions would have helped. The lack of experiments comparing with state-of-the-art baselines like [1,2] also does not help. - Relying only on a UVFA for generalisation has clear limitations (e.g. in terms of the spatial and temporal curses of dimensionality present in temporal logic tasks), since no optimality and generalisation guarantees can be given even in finite MDPs (like grid-worlds). This is why the field has largely moved away from such approaches. More recent works such as [1,2] (and the large majority of prior works like [3,4]) break down the problem into a set of reach-avoid sub-tasks (goals), then use planning (or HRL) to determine how to sequence those goals (addressing the temporal curse). These works use UVFA for the sub-goal policies only when the goal space is sufficiently large. When the goal space isn't too large, such as in the letter grid-world, these methods can still learn and generalise without relying on a UVFA (by learning the policies/options for each goal independently). - This reliance on end-to-end function approximation for generalisation hurts the significance of this work. [1] Tasse, Geraud Nangue, et al. "Skill Machines: Temporal Logic Skill Composition in Reinforcement Learning." The Twelfth International Conference on Learning Representations. (2024) [2] Qiu, Wenjie, Wensen Mao, and He Zhu. "Instructing goal-conditioned reinforcement learning agents with temporal logic objectives." Advances in Neural Information Processing Systems 36 (2024). [3] Araki, Brandon, et al. "The logical options framework." International Conference on Machine Learning. PMLR, 2021. [4] Jason Xinyu Liu, Ankit Shah, Eric Rosen, George Konidaris, and Stefanie Tellex. Skill transfer for temporally-extended task specifications. arXiv preprint arXiv:2206.05096, 2022. 
# Minor - It is not clear what "concept class" means or how it is different from a "task class". Are they the same? - It is not clear what GATv2 in the caption of Figure 1 means. Please add in the caption that it is a GNN. - It seems like "task" and "goal" mean the same thing in this work (line 16), i.e. they mean a DFA. Is that correct? That wasn't clear. - Adding the cDFA in Fig 1 to Fig 2 would also help. - There are a couple of minor typos or writing mistakes. For example: - There are numerous places where shorthands like "i.e." and "e.g." are used (e.g. first paragraph of the introduction). This is not recommended in formal writing. Please spell them out. - The sentence on Line 184 doesn't end. Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to the weaknesses above. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have discussed some of the limitations. However, there seem to be several others. For example, - Just like *some* prior works, the paper assumes that all tasks are defined over a fixed set of given atomic propositions. If the set increases, everything needs to be retrained. - The paper demonstrates their approach only on DFAs with one atomic proposition per edge. It is unclear how it will behave when the DFA transitions are more complex (i.e. arbitrary Boolean expressions over the atomic propositions). - The paper relies on UVFAs to learn general policies. Hence, it is likely that it can only generalise to tasks *similar* to those it is trained on, and not to tasks that are *significantly* out of distribution. The paper does not investigate when the model fails to generalise and why. That would have helped the reader have some justified intuition of what types of tasks are *significantly* out of distribution, such that function approximation fails for the proposed approach. 
I recommend the authors carefully think of the various significant limitations of their approach, such as the one I mentioned above, and explicitly state and discuss them in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: _Thank you for the time and effort put into your review._ ### Regarding LTL2Action not being finite The reviewer is right in the sense that in the LTL2Action paper, the encoder could *mechanically* be applied to standard LTL, but there is no indication that this will work. **The reality is that the training code and experiments only consider finite LTL.** For example the [corresponding line in the LTL2Action repo](https://github.com/LTL2Action/LTL2Action/blob/485cbc1055dc9fbfbba50350c85de11fc1730540/src/ltl_wrappers.py#L98) shows that an LTL task is declared "done" when the underlying LTL formula cannot be "progressed" anymore. The LTL2Action paper leaves room for the idea of using rewards to encode infinite horizon tasks, but we note that this is an active research topic, with some of the more [theoretically sound and practical](https://proceedings.mlr.press/v202/voloshin23a.html) works on the subject post-dating LTL2Action. As such, we stand by our initial assessment that LTL2Action should be compared against techniques for finite trace properties. ### DFAs vs cDFAs Please see the global rebuttal for an explanation of extension to negations and disjunctions. ### Comparison with GCRL-LTL Although the solved problems are similar, comparison with GCRL-LTL is an "apples to oranges" comparison, as it is a hierarchical planning-based method that does not generate embeddings. These are fundamentally different approaches, and the work on GCRL-LTL already provides a comparison. Such approaches will, of course, learn faster than an embedding space-based approach since they only learn the goals in the environment, and the temporal planning is done algorithmically. However, as pointed out by Reviewer bN73, we believe there are many interesting future works that could follow from our approach. Please see section **Baselines** in the global rebuttal for more details. 
### Having Boolean expressions on edges Standard DFAs cannot have Boolean expressions on edges. Automata with Boolean expressions on edges are variants of symbolic automata, which is an extension that we leave for future work. ### Regarding self transitions As written on line 137 of the submission, for ease of notation, we assume _stuttering semantics_: "Omitted transitions are assumed to stutter, that is, transition back to the same state". ### Why is freezing better? This is a good question. Our current hypothesis is that the RAD pre-training learns a sufficiently good representation that introducing changes to it and/or computing the corresponding gradients ultimately makes performance worse. In particular, fewer of the gradient updates are directed at learning dynamics and instead try to overfit to the specific concept class. However, as this is pure speculation, we have left such discussion out of the paper and think it's an interesting avenue for future research. ### Stress testing The issue with the proposed conjunction stress test is that the more DFAs you have in a conjunction, the less likely it is that the sampled task is satisfiable, since each DFA is essentially a constraint over the set of accepted behaviors, and as the number of constraints increases, it is more likely to end up with an empty set. So it is hard to say if the framework would break because of a limitation of the approach or because the sampled tasks are not satisfiable. Nevertheless, it's an interesting question to design a stress test that appropriately separates the failure causes. ### Comments on end-to-end function approximation We respectfully disagree that the community is moving away from end-to-end function approximation, particularly for reward specification. If anything, the introduction of strong multi-modal image/language embeddings such as CLIP has resulted in an explosion of work focused on end-to-end learning from embeddings of a task. 
They include using [sketches](https://rt-sketch.github.io/), [language corrections](https://yay-robot.github.io/), and of course [language instruction](https://llarva24.github.io/). Finally, while hierarchical planning is a powerful approach, as previously discussed, it is not a complete replacement for end-to-end function approximation. We believe that a focus on function approximation and embeddings for logical specifications not only aids generalization but also provides a path forward to integrate multiple modes (DFA embeddings, natural language embeddings, image embeddings) in machine learning models. ### On exposition and definitions Thank you for your feedback. As reviewer WW4E noted, combining formal objects like DFAs and MDPs can result in notation-heavy and unapproachable work. One technique we employed in the definitions of MDPs and goal-conditioned RL was a simplification of the essential components necessary to understand the material at hand. We will use the additional space afforded by the camera ready to remark that these are simplifications of the more general definitions, as you note. For example, we indeed omitted the reward (as is sometimes done in inverse learning works) to emphasize that we are in the binary sparse reward setting. Similarly, because goal-conditioned RL reduces to maximizing reach probability, this is the definition we employ. Again, we will make this clear in future versions. Finally, we chose to omit definitions such as message passing NN and employ shorthands such as "e.g." and "i.e." as they are standard within the NeurIPS community. - **Regarding Defn 3.1.** DFA-conditioned policies use both the environment state and the DFA embeddings. This pair corresponds to the augmented state space. We showed this by abusing the notation and writing $\pi : S \times DFA$. We will make it clear in the final version. - **Minor questions.** We use "concept class" and "task class" as well as "goal" and "task" interchangeably. 
We will clarify this and your other comments in the final version. --- Rebuttal Comment 1.1: Title: Comment by Reviewer A6oy Comment: Thank you to the authors for their detailed response and clarifications. I particularly appreciate the comments regarding the originality and significance of the proposed approach, and the clarifications regarding DFAs vs cDFAs. The intuition behind why frozen embeddings are giving the best results is also interesting. I think a detailed discussion of this will improve the quality of the paper, given that it is the second main contribution of the paper. I believe the RAD pretraining approach is interesting and as I mentioned in my original review, it could be a useful contribution to the field. But I am not fully convinced by the provided empirical evidence. I have the following main outstanding concerns regarding this paper: - GCRL+LTL/DFA and end-to-end function approximation (UVFA-only) methods do have their tradeoffs. As the authors have highlighted, GCRL-based methods are often very sample-efficient but often suffer from suboptimality which requires planning while taking into account the environment states (e.g. using Dijkstra). Similarly, UVFA-only methods have the potential to learn optimal policies, but lack any optimality and generalisation guarantees due to the reliance on function approximation. Hence, it is important to understand empirically or theoretically when a proposed UVFA-only approach is likely to suffer from sub-optimality. My main concern here is that there's not enough investigation done on the tradeoffs induced by the RAD pretraining approach, and the reliance on UVFAs for generalisation over cDFAs. Such investigations are particularly important in RL research where the statistical significance of results is often very low (for example this paper shows results over only 10 seeds, which is common, but likely statistically insignificant). 
- Regarding DFAs with Boolean expressions on edges, I meant Boolean expressions over atomic propositions (AP). I.e. when your set of symbols is $2^{AP}$, as is commonly done. The experiments only have edges over AP, which is a severely restricted class of DFAs. - I see what you mean for the conjunction stress test. But you can still define such tasks. An example in the letter world is the conjunction of all 12*11 unique SRA tasks: Reach Letter$\_i$ while avoiding Letters$\_{j \neq i}$. You can also have a sequential stress test. E.g. an SRA task of length k ∼ Uniform(1,100). - As I mentioned previously, it will help to include examples of the simplest and most complex DFAs from the defined distributions. I have increased my score to reflect the clarifications provided and my outstanding concerns.
Rebuttal 1: Rebuttal: _We'd like to thank the reviewers for their time and efforts. Below are common points we'd like to emphasize across all reviews._ ## RAD Pretraining + Frozen Embeddings First, we wanted to highlight what we consider the most important contributions of our work: 1. The RAD pretraining. 1. The frozen encoder and RAD embeddings. Notably, RAD is a carefully crafted task distribution that exploits DFA-specific structure to encourage zero-shot generalization to downstream tasks. Reviewer bN73 put it well when they wrote: > [RAD is designed from] what motifs ought to be repeated across a large class of DFAs (namely, reach-avoid tasks), and how that could be exploited to enable generalization. We believe that our generalization results (Figure 6 and Appendix C.2-C.8), training with frozen embeddings (Figure 4), and embedding visualizations (Figure 5 and C.3) speak to the usefulness and generality of RAD pretraining. *We see RAD pretraining as a building block for other applications.* - **Multi-modal task descriptions:** One might consider combining multiple embeddings for control problems. For example, using our RAD embeddings to specify a mission-level task, preferences using natural language CLIP embeddings, and a grounding of atomic propositions/labels using [hand-drawn sketches](https://rt-sketch.github.io). - **Task retrieval:** In the future, one might train a natural language to RAD embeddings network to enable vector database retrieval of tasks that a robot is pre-authorized to perform. ## Baselines A common question raised by reviews (A6oy, 7HER) is about baselines. The only existing approach directly comparable to ours is LTL2Action, which we compare against in Appendix C.8. With the additional space, we will move this comparison to the main body of the paper. Several other works (e.g., [1,2,3,4] from reviewer A6oy and GCRL-LTL, also mentioned by 7HER) pursue a totally different technique for goal-conditioning with logical specifications. 
These works use a hierarchical framework: classical planning techniques (e.g. Dijkstra’s algorithm) are used to plan over an automaton (or automaton-like structure) and trained neural policies accomplish the individual steps in the plan. While these works can also be used for satisfying logical specifications over MDPs, they differ in two critical ways: 1. They don't produce task embeddings. Our work is explicitly focused on generating useful task embeddings (see above for why), which we accomplish largely through RAD pretraining. 2. Hierarchical planning techniques (including all of the papers mentioned above) are generally sub-optimal. Figure 1 in the attached pdf shows a gridworld demonstrating the problem with hierarchy. The robot's task is to first go to orange and then green. A hierarchical plan would first attempt to reach orange in the quickest way, which would be going to square (a). However, because hierarchical plans are unable to account for the entire task and the dynamics of the MDP, they don't realize that this now takes them further away from the second part of the task, reaching green. The situation can be made substantially worse if (i) the non-Markovian reasoning requires reasoning even further into the future and (ii) the dynamics of the MDP were somehow irreversible so that choosing the wrong orange square makes it _impossible_ to then reach the green square. It is easy to construct such traps for current hierarchical approaches. By using task embeddings (our work and LTL2Action), policies reason about the _full_ task, and thus could avoid such traps. We will make these differences clearer in the final draft. ## Differences from LTL2Action As noted in the paper and by several other reviewers, the mechanics of our approach take inspiration from LTL2Action. Superficially, the main changes are the encoder (now based on GATv2) and the target representation (automata rather than logical sentences). 
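To make the gridworld intuition concrete, here is a minimal sketch (using an assumed one-dimensional layout for illustration, not the actual figure from the attached pdf) showing how greedily reaching the nearest subgoal first can be strictly worse than reasoning about the full "orange then green" task:

```python
# Illustrative 1-D layout (our assumption): robot at x=0, two orange
# squares at x=-1 and x=+3, green at x=+5. Cost = total distance walked.
robot, oranges, green = 0, [-1, 3], 5

def cost_via(orange):
    # Total cost of the full task if we visit this orange square first.
    return abs(orange - robot) + abs(green - orange)

# A hierarchical planner greedily picks the nearest orange first...
greedy_orange = min(oranges, key=lambda o: abs(o - robot))
greedy_cost = cost_via(greedy_orange)             # via x=-1: 1 + 6 = 7

# ...whereas reasoning about the full task picks the orange on the way.
optimal_cost = min(cost_via(o) for o in oranges)  # via x=+3: 3 + 2 = 5
print(greedy_cost, optimal_cost)  # greedy is strictly worse: 7 vs 5
```

With an irreversible dynamic (e.g. a one-way door behind x=-1), the greedy choice would not merely be slower but would make green unreachable, which is the trap described above.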
A key argument of the paper is that by focusing on DFAs, we can exploit the fact that (i) solving a DFA "just" corresponds to finding a satisfying path and (ii) that subproblem can itself be represented as a DFA. This is fundamentally not the case with **declarative semantics** such as temporal logic, which are naturally represented as syntax trees and require non-local reasoning (does satisfying one part of the subtree affect the satisfaction of the other?). As the next section emphasizes, cDFAs offer a middle ground: they support the non-temporal syntactic structure of LTL while allowing DFAs to handle the temporal relations. ## cDFAs The conjunctive composition (cDFAs) investigated in the paper joins the graphs of several DFAs into a single graph. This is done by adding an "AND" node that is connected to each DFA in the composition and avoids explicitly computing the composition of the DFAs (an exponential blow-up of the number of states). For example, given N DFAs each with M states, their conjunctive composition is a DFA with $O(M^N)$ many states, whereas their cDFA representation (which does not compute the composition explicitly) has $O(M\cdot N)$ many states. The cDFA representation avoids *the exponential blow-up in the number of states caused by the explicit evaluation of the composition*; it therefore addresses the curse of dimensionality. **Arbitrary DFA compositions.** One can express arbitrary Boolean combinations of DFAs in a cDFA representation, using a CNF tree. Extending conjunctive composition to arbitrary Boolean combinations follows from three observations: 1. Any Boolean formula can be expressed in CNF. 2. We can combine DFAs in a CNF tree, where the "AND" node would again be the root node, i.e., level 0, the disjunction nodes would be in level 1, and DFAs would be in the leaves. 3. DFAs are closed under negation, and the number of states stays constant when a DFA is negated, so negations can be directly applied to individual DFAs. 
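The state-count argument above can be checked with a small sketch. This is a toy encoding we wrote for illustration (the `DFA` class and the reach-avoid construction are our assumptions, not the paper's code); omitted transitions stutter, matching the paper's convention:

```python
from itertools import product

class DFA:
    """Toy DFA with transitions as a dict; omitted transitions stutter."""
    def __init__(self, states, accepting, trans):
        self.states = states          # e.g. {0, 1}
        self.accepting = accepting    # subset of states
        self.trans = trans            # {(state, symbol): next_state}

    def step(self, s, a):
        return self.trans.get((s, a), s)  # stutter by default

def explicit_product_states(dfas):
    """Explicit conjunction: states are tuples, so there are M**N of them."""
    return set(product(*(d.states for d in dfas)))

# N reach-avoid DFAs with M = 2 states each ("not yet reached" / "reached").
N, M = 5, 2
dfas = [DFA({0, 1}, {1}, {(0, f"goal{i}"): 1}) for i in range(N)]

product_states = len(explicit_product_states(dfas))   # M**N = 32
cdfa_states = sum(len(d.states) for d in dfas) + 1    # M*N + "AND" node = 11
print(product_states, cdfa_states)
```

Even at this toy scale the explicit product has 32 states versus 11 nodes in the cDFA graph; the gap grows exponentially in N, which is the blow-up the cDFA representation sidesteps.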
Pdf: /pdf/76036047d968684f93b268bf3aba384dcf0bc163.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
IMAGPose: A Unified Conditional Framework for Pose-Guided Person Generation
Accept (poster)
Summary: This paper proposes a diffusion-based pose-guided image generation method. Specifically, given an image and a sequence of poses, this paper aims to generate images that follow the input poses and maintain the appearance of the input image. To capture the texture details of the input image, they propose to combine features from a VAE and a pre-trained image encoder. They design an Image-level conditioning module, which combines four images into a joint representation. The images that need to be predicted are masked. By doing this, their method can generate 1-3 images in one forward step, or use 1-3 condition images at the same time. Strengths: 1. They propose to combine VAE and image encoder features as the condition, which helps the model to generate more detailed images. 2. Their Image-Level Conditioning design enables their models to condition on a dynamic number of input images. Weaknesses: 1. Limited novelty. 1) The main contribution of this paper is forming four image latents into a 4-grid joint latent, which is very similar to [1]. 2) The design of Sec 3.2 combines the VAE feature with the feature from DINO, which is similar to Animate Anyone [2], which uses CLIP image features as the condition. 2. Lack of thorough comparison methods. Given an RGB person image $\alpha$ and a pose image $\beta$, this paper aims to generate a person that follows the pose image $\beta$ with the appearance of $\alpha$, which can also be called pose transfer. Thus, this paper should also compare their results with pose-guided human pose transfer methods: Animate Anyone [2], DreamPose [3], Follow your pose [4] …. Besides, this paper injects pose guidance in a ControlNet way; they should also compare their performance with ControlNet. 3. Incorrect statements. 
In lines 44-50, they claim that former diffusion-based generation methods use a frozen image encoder pre-trained on a general image dataset (not specific to person images), so these methods cannot extract detailed texture information. However, Animate Anyone [2] trains a reference appearance net on large-scale human datasets, which can effectively encode detailed texture features. [1] Kara, Ozgur, et al. "Rave: Randomized noise shuffling for fast and consistent video editing with diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [2] Hu, Li, et al. "Animate anyone: Consistent and controllable image-to-video synthesis for character animation." arXiv preprint arXiv:2311.17117 (2023). [3] Karras, Johanna, et al. "Dreampose: Fashion image-to-video synthesis via stable diffusion." 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. [4] Ma, Yue, et al. "Follow your pose: Pose-guided text-to-video generation using pose-free videos." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 5. 2024. Technical Quality: 2 Clarity: 1 Questions for Authors: 1. Since this paper forms four images into a joint feature map, the self-attention in SD can conduct feature transformation among the images. Why do you split the joint feature map and conduct temporal attention among them (Sec 3.4)? 2. This paper evaluates their method solely on person images with clean backgrounds. How does this method perform on images with complex, meaningful backgrounds? Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: Yes, they have provided discussions about limitations and negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Cm6N, We thank the reviewer for the positive feedback and valuable comments. **Q1: Limited novelty. 1) The main contribution of this paper is forming four image latents into a 4-grid joint latent, which is very similar to [1]. 2) The design of Sec 3.2 combines the VAE feature with the feature from DINO, which is similar to Animate Anyone [2], which uses CLIP image features as the condition.** **Response:** (1) We sincerely disagree with your viewpoint. First, our method is not merely a simple 4-grid joint latent. Our primary innovation involves first merging all target images into a unified target, followed by a carefully designed masking strategy that randomly masks these images into a joint mask, an approach not mentioned in reference [1]. Furthermore, our proposed ILC module aims to adapt to various user scenarios by randomly injecting different source image conditions and incorporating source image-level conditions, which are entirely different from the video editing tasks, motivations, and methods described in reference [1]. More importantly, IMAGPose introduces two new task settings and attempts to unify them within a single framework, achieving competitive results in each setting. (2) Please refer to the **shared response** on "Differences with technologies like Animate Anyone." We have added and discussed these differences. **Q2: Lack of thorough comparison methods. Given an RGB person image $\alpha$ and a pose image $\beta$, this paper aims to generate a person that follows the pose image $\beta$ with the appearance of $\alpha$, which can also be called pose transfer. Thus, this paper should also compare their results with pose-guided human pose transfer methods: Animate Anyone [2], DreamPose [3], Follow your pose [4] …. Besides, this paper injects pose guidance in a ControlNet way; they should also compare their performance with ControlNet.** **Response:** Thank you for your kind reminder. 
Following your suggestion, as shown in **Figure 2 of the submitted PDF file**, we have added comparisons with Animate Anyone [2], DreamPose [3], and ControlNet [5] on the DeepFashion dataset. It is important to note that we cannot directly compare our method with Follow Your Pose [4], as the input conditions for Follow Your Pose [4] are text and pose, and it does not support guidance by image references. Quantitative and qualitative results demonstrate that IMAGPose achieves highly competitive performance. This is because references [2,3,5] focus on continuous pose-guided generation; our approach is specifically designed for scenarios involving non-continuous poses. Moreover, we wish to highlight that IMAGPose explores and unifies image generation across different user scenarios, including Scenario (1), generating one target image from one source image and one target pose; Scenario (2), generating one target image from multiple source images and one target pose; and Scenario (3), generating multiple target images from one source image and multiple target poses. However, references [2,3,5] are optimized and designed for a single scenario task, lacking the capability for unification and multitasking. For example, references [2-3] only support Scenario (3), and ControlNet [5] only supports Scenario (1). Neither accommodates the user's need to generate from multiple source images. |Methods| SSIM (↑)| LPIPS (↓) | FID (↓)| |----------|----------|----------|----------| | ControlNet| 0.6725| 0.2443 |15.8762| |DreamPose | 0.7161| 0.1694 | 8.2510| | Animate Anyone| 0.7386 | 0.1319 | 6.8024| |**Ours**|**0.7561** | **0.1284** | **5.8738**| **Q3: Incorrect statements. In lines 44-50, they claim that former diffusion-based generation methods use a frozen image encoder pre-trained on a general image dataset (not specific to person images), so these methods cannot extract detailed texture information. 
However, Animate Anyone [2] trains a reference appearance net on large-scale human datasets, which can effectively encode detailed texture features.** **Response:** We wholeheartedly agree with your perspective. As you mentioned, enhancing the model's ability to encode texture features [2] involves integrating an additional reference network and a large dataset. In other words, without these additions, it becomes relatively challenging. Following your suggestion, we have added a constraint requiring no additional modules and datasets. Thank you once again for your meticulous guidance. **Q4: Since this paper forms four images into a joint feature map, the self-attention in SD can conduct feature transformation among each images. Why do you split the joint feature map and conduct temporal attention among them (Sec 3.4)?** **Response:** Thank you for your insightful question. We split the joint feature map and apply temporal attention for the following reasons: (1) Splitting allows for a more detailed exploration of the temporal context of each image, helping to capture subtle changes. (2) This approach enhances the model’s ability to track and integrate dynamics across varying poses and scenes, which is crucial for accurately modeling scenarios involving extensive pose transitions or scene changes. (3) Joint features can mask or dilute individual image characteristics. Splitting the feature map allows for more targeted feature transformations, maintaining the integrity of each individual image. **Q5: This paper evaluates their method solely on person images with clean background. How does this method perform on images with complex, meaningful backgrounds.** **Response:** Please refer to the **shared response** regarding the "Results on Out-of-Domain Datasets." We have included additional visualization results on more diverse datasets. --- Rebuttal 2: Title: Seeking Further Feedback Comment: Dear Reviewer Cm6N: Again, thank you very much for the detailed comments. 
In our earlier response and revised manuscript, we have conducted additional experiments and provided detailed clarifications based on your questions and concerns. As we are ending the stage of the author-reviewer discussion soon, we kindly ask you to review our revised paper and our response and consider adjusting the scores if our response has addressed all your concerns. Otherwise, please let us know if there are any other questions. We would be more than happy to answer any further questions. Best, Authors --- Rebuttal Comment 2.1: Title: Let's engage in reviewer-author discussion Comment: Dear Reviewer Cm6N, We look forward to seeing your comments on the authors' rebuttal and any further clarifications you may need. Thanks --- Rebuttal 3: Title: I’m inclined to change my final review to Borderline Reject. Comment: Thank you for your detailed response. They claim to differ from [1] in several aspects. I appreciate their clarification, explaining that they can mask 1-3 images in the 4-grid setting, allowing for image generation beyond the two images used in [1]. However, it would be relatively straightforward to retrain [1] to ‘adapt to various user scenarios’ by simply modifying its masking strategy. Besides, I recommend the authors add some visualization results about how the self-attention in SD performs under the different masking strategies, e.g., the attention maps. Additionally, the four-grid setting reduces the resolution of each sub-image, as the generated sub-images are one-fourth the size of the original SD. Given that their focus is on image generation, which demands detailed, high-quality results, the output may not meet the desired level of quality. Finally, this paper presents a fairly complete work. Although the novelty of each module is relatively limited, when combined, it forms a cohesive and comprehensive study. Their response addresses some of my concerns, so I am willing to change my final review to borderline reject. 
However, I still have reservations about the novelty of the work. --- Rebuttal 4: Title: Thank you and expect more discussions! Comment: Dear Reviewer Cm6N, We sincerely thank you once again for your detailed suggestions and encouragement, such as “**a fairly complete work**” and “**a cohesive and comprehensive study.**” These comments have significantly enhanced our work and inspired us to pursue further research! **Q1: it would be relatively straightforward to retrain [1] to ‘adapt to various user scenarios’ by simply modifying its masking strategy.** **Response:** We respectfully disagree. We believe this overlooks a key contribution of our work: proposing two additional extended tasks from real-world user scenarios. Existing methods typically require training three separate models for generating images in these three different scenarios. In contrast, our proposed IMAGPose, through the design of the FLC, ILC, and CVA modules, achieves unification and delivers competitive performance across all scenarios. Additionally, we want to emphasize that our approach **is not simply about modifying the masking strategy.** We introduced an additional set of combined images as a condition (i.e., concatenating four images along the height and width) and then masking certain sub-images. This differs from [1], which directly uses latent space representations. By using image-level conditions, we provide more contextual information. **Q2: Attention maps.** **Response:** Due to NeurIPS policies, we are unable to provide additional experimental results (including anonymous links) at this stage. However, we can describe the attention visualization: ‘split image’ pays more attention to texture details compared to ‘joint image,’ as the latter focuses on global consistency while the former emphasizes detail consistency between frames. We will include this visualization and analysis in the final version of the paper. 
**Q3: Additionally, the four-grid setting reduces the resolution of each sub-image, as the generated sub-images are one-fourth the size of the original SD. Given that their focus is on image generation, which demands detailed, high-quality results, the output may not meet the desired level of quality.** **Response:** We are concerned there might be a misunderstanding about the four-grid setting. In fact, our training and inference data are doubled in both height and width, meaning the output results (every sub-image) are the same size as the original SD outputs. Additionally, extensive qualitative, quantitative, and user studies have demonstrated the strong competitive performance of our method. **Q4: I still have reservations about the novelty of the work.** **Response:** We are concerned there may be a misunderstanding regarding the novelty of our work. **(1) This paper is the first to define three different pose-guided image generation tasks based on real-world user scenarios. (2) IMAGPose is the first to attempt solving these three scenarios with a unified framework, and extensive experimental results demonstrate competitive performance. (3) We proposed the innovative ILC module and masking strategy design; the FLC module, which injects texture information with a simpler architecture compared to Reference UNet; and the CVA module, which ensures local fidelity and global consistency in person image generation.** **If you have any additional questions or anything you would like to discuss in more detail, please feel free to let us know (the Author/Reviewer discussion deadline of 08/13 is quickly approaching). We would be more than happy to discuss further and respond promptly.** Best, Authors --- Rebuttal 5: Title: Seeking Further Feedback, thank you! 
Comment: Dear Reviewer Cm6N, We have provided a detailed explanation of the novelty of our task, the feature-level conditioning (FLC) module, the image-level conditioning (ILC) module, and the cross-view attention (CVA) module, including how they differ from existing tasks and methods in terms of motivation and approach. More importantly, we achieved unification across three different scenarios and demonstrated strong competitive performance through extensive quantitative, qualitative, and user studies. **If you have any remaining concerns about the novelty, please feel free to share them with us. Thank you!** **Although the author-engaged discussion phase will be over by today, if you have any additional questions or open discussions, please don't hesitate to leave more comments. We are always available at all times to actively address any concerns or to prepare for more discussions. Your opinions are rather important for us to improve the work!** Best, Authors
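As an aside, the four-grid conditioning discussed in this thread can be sketched as follows. The shapes, the 2x2 tiling, and the mask semantics are our assumptions for illustration, not the authors' implementation; the point is that because the joint canvas is doubled in height and width, each sub-image keeps the full latent resolution:

```python
import numpy as np

# Illustrative shapes only (assumed): four latents of size (C, H, W)
# tiled into a 2x2 grid, giving a joint latent of size (C, 2H, 2W).
C, H, W = 4, 64, 64
rng = np.random.default_rng(0)
latents = [rng.standard_normal((C, H, W)) for _ in range(4)]

top = np.concatenate(latents[:2], axis=2)      # (C, H, 2W)
bottom = np.concatenate(latents[2:], axis=2)   # (C, H, 2W)
joint = np.concatenate([top, bottom], axis=1)  # (C, 2H, 2W)

# Randomly mask 1-3 of the four cells: masked cells (0) are targets to
# generate, unmasked cells (1) act as source-image conditions.
mask = np.ones((1, 2 * H, 2 * W))
n_masked = int(rng.integers(1, 4))
cells = [(0, 0), (0, 1), (1, 0), (1, 1)]
for k in rng.choice(4, size=n_masked, replace=False):
    i, j = cells[k]
    mask[:, i * H:(i + 1) * H, j * W:(j + 1) * W] = 0

print(joint.shape, n_masked)  # each sub-image stays at (C, H, W) resolution
```

Varying `n_masked` between 1 and 3 is what lets one framework cover the three scenarios described in the rebuttal (one source / multiple targets, multiple sources / one target, and one source / one target).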
Summary: This paper considers three different pose-guided image generation scenarios from a scene perspective and attempts to cover all scenarios using a unified framework. In my opinion, it is very insightful and inspiring. The proposed IMAGPose framework unifies all scenarios through several ingenious components, namely FLC, ILC, and CVA. The qualitative and quantitative evaluations, as well as the user study, demonstrate its strong competitiveness. Strengths: (1) Existing pose-guided image generation has only considered scenarios involving a single source image and a single target pose. However, this paper insightfully introduces two additional potential scenarios: providing multiple target poses at once and multiple source images at once. These assumptions are both reasonable and necessary. (2) The proposed IMAGPose framework addresses the needs of different scenarios through simple yet ingenious designs. For example, it directly uses VAE features to capture underlying texture details, learns image conditions implicitly by taking the image as input, and employs the CVA module's innovative setting of merging, splitting, and then merging again, which is particularly enlightening. (3) The figures and tables are well-organized and clearly presented. The experiments are comprehensive, the results are convincing, and the appendix provides detailed supplementary information. Weaknesses: (1) The Reference UNet [1] also uses features from both the VAE and image encoder. It would be beneficial if the authors could discuss the differences and advantages of IMAGPose in comparison to this approach. (2) In line 152, does the term "image encoder" refer to CLIP? Why not use the image encoder features from DinoV2? [minor] In line 184, "IFC" should be corrected to "ILC". [1] Li Hu et al. Animate anyone: Consistent and controllable image-to-video synthesis for character animation. 
Technical Quality: 3 Clarity: 3 Questions for Authors: (1) The authors did not specify what type of pose encoder is used. Could you provide more details? (2) It is commendable that the authors have evaluated the speed for different SOTA models. However, I am curious about the training duration. [minor] Appendix Figure 17 needs some descriptive text. Although it is understandable based on the main manuscript, a good appendix should also have comprehensive captions. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have already discussed limitations and societal impact in the conclusion and appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer rV96: We thank the reviewer for their detailed feedback and encouraging comments. **Q1: Comparison with Reference UNet [1]** **Response:** Please refer to the **shared response** on "Differences with technologies like Animate Anyone." We have added and discussed these differences. **Q2: In line 152, does the term "image encoder" refer to CLIP? Why not use the image encoder features from DinoV2?** **Response:** In fact, the default image encoder is set to Dinov2-G/14. We have added the results of IMAGPose using various image encoders, and it is evident that Dinov2-G/14 performs better across all metrics. |Image Encoder| SSIM (↑)| LPIPS (↓) | FID (↓)| |----------|----------|----------|----------| | CLIP-B/14| 0.7516 | 0.1364 | 6.1342| | CLIP-bigG/14| 0.7548| 0.1331 | 5.9645| | CLIP-H/14| 0.7552| 0.1296 | 6.0231| | Dinov2-B/14| 0.7541| 0.1343 |5.9286| | Dinov2-L/14| 0.7556 | 0.1323 | 5.9432| | Dinov2-G/14|0.7561 | 0.1284 | 5.8738| **Q3: The authors did not specify what type of pose encoder is used. Could you provide more details?** **Response:** We apologize for any confusion caused. In fact, the pose encoder is implemented with several lightweight convolutional layers, similar to ControlNet. It is injected after the first convolutional layer of the Denoise UNet. **Q4: Training Time.** **Response:** About 45.3 hours on 8 V100 GPUs. **[m1]: In line 184, "IFC" should be corrected to "ILC".** **Response:** We apologize for the error in the manuscript here and greatly appreciate the reviewer's patient reading. Furthermore, we have made every effort to check the revision thoroughly. **[m2]: Appendix Figure 17 needs some descriptive text. Although it is understandable based on the main manuscript, a good appendix should also have comprehensive captions.** **Response:** Thank you for your valuable feedback. We agree that providing comprehensive captions in the appendix can enhance readability and independent understanding. 
Following your suggestion, we will add detailed captions to the appendix in the revised manuscript. Thank you once again for your guidance. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. This is a very interesting work, and it is the first to consider three distinct user scenarios: generating a target image from a source image and a target pose; generating a target image from multiple source images and a target pose; and generating multiple target images from a source image and multiple target poses. The paper offers valuable insights and practical significance. After reading the rebuttal and other reviews, the authors have successfully addressed my concerns and cleared up any misunderstandings. Notably, the unification of these different use cases in this paper is impressive, and I hold a very positive view of its novelty and contributions. I do not see any major weaknesses in this work, and I will be raising my score in support of acceptance. --- Reply to Comment 1.1.1: Comment: Dear Reviewer rV96, Thanks for your response. We are happy to see that our response can solve your concerns. The results and analyses corresponding to your questions further improve the quality of our work. Thank you! Best, Authors --- Rebuttal 2: Title: Seeking Further Feedback Comment: Dear Reviewer rV96, Thank you for your support and helpful comments. We've tried our best to address your concerns, and we hope our responses make sense to you. Importantly, we greatly value your comments and would be happy to discuss more. **Although the author-engaged discussion phase will be over by tomorrow, if you have any additional questions or open discussions, please don't hesitate to leave more comments. We remain available at all times to actively address any concerns and are prepared for further discussion.** **Your opinions are very important for us to improve the work!** Thank you! 
Sincerely, Authors --- Rebuttal Comment 2.1: Title: Let's engage in reviewer-author discussion Comment: Dear Reviewer rV96, We look forward to seeing your comments on the authors' rebuttal and any further clarifications you may need. Thanks
Summary: This paper thoroughly analyzes and considers the application scenarios of pose-guided person image synthesis from the perspective of real-world significance. The authors introduce previously unconsidered but intriguing scenarios and propose the IMAGPose framework to unify different tasks. Comprehensive experiments, ablation studies, and user studies are conducted to demonstrate the effectiveness of the proposed method. Strengths: - The proposed task scenarios are highly insightful, with a clear and well-motivated approach. - The study's contribution of a unified conditional diffusion model to address different task scenarios is of significant value. - The evaluation is comprehensive, and the proposed method generally demonstrates superior performance compared to existing works, supported by user studies and clear visualizations. - The impact of different components is analyzed through ablation studies, further proving the effectiveness of the proposed technique and providing a better understanding of the method. Weaknesses: The IMAGPose framework heavily relies on the detection results from OpenPose. I am curious about how the performance of IMAGPose would be affected if OpenPose produces poor results. Do the authors have any solutions to mitigate this issue? Technical Quality: 3 Clarity: 4 Questions for Authors: - The design of FLC is interesting, as it uses almost lossless features from VAE as conditions. However, for pose-guided image generation, would it be better to add a text caption condition? For example, T2I-Adapter incorporates both text and image features through a decoupled cross-attention mechanism. How does this differ from the authors' concatenation approach? - In ILC, the mask strategy is a critical component for unifying different conditional generations. But I am slightly confused about the purpose of this binary mask. What are its benefits/aim? 
- I noticed that if multiple poses are given at once and these poses are very continuous, would this be similar to generating a video based on continuous poses, such as a dance sequence? In such a user scenario, what are the comparisons and advantages of IMAGPose? - Is there a trade-off between speed and efficiency? For example, if a user only wants to generate a target image based on a single pose, how should they proceed, and how should they make their choice? - In Figure 8, what does IMAGPose* refer to? - What is the guidance scale for CFG? - In Figure 9, does T1 (Default) refer to IMAGPose? --- After the rebuttal, I raised my score from **6 to 8** because this work provides a new perspective on traditional tasks (a novel user angle) and unifies the framework. The authors' efforts are commendable, and I believe this work deserves acceptance. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The authors addressed the limitations and the work does not have negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer QBtt: Thank you for your review and insightful comments. We address your questions as follows. **Q1: The IMAGPose framework heavily relies on the detection results from OpenPose. I am curious about how the performance of IMAGPose would be affected if OpenPose produces poor results. Do the authors have any solutions to mitigate this issue?** **Response:** Thank you for your thoughtful suggestions. We are concerned that there may be a misunderstanding regarding our reliance on OpenPose detection results. As a crucial condition for guided image generation, pose directly influences the outcomes, as you have described. We use the same OpenPose as all existing SOTA methods for a fair comparison. In other words, errors in OpenPose results affect our outcomes and other SOTA methods similarly. To mitigate inaccuracies in generated results caused by imprecise pose estimation, we plan to incorporate 3D prior information, such as depth maps and normals, in the future. **Q2: The design of FLC is interesting, as it uses almost lossless features from VAE as conditions. However, for pose-guided image generation, would it be better to add a text caption condition? For example, T2I-Adapter incorporates both text and image features through a decoupled cross-attention mechanism. How does this differ from the authors' concatenation approach?** **Response:** Thank you for your recognition and praise. We believe it would be beneficial to add some textual captions to the existing task; however, it should be noted that this would constitute an entirely new task. In pose-guided person generation, the only permissible conditions are pose and reference images of the person to ensure a fair comparison. Moreover, the T2I-Adapter introduces text and image conditions in parallel through a decoupled mechanism. In contrast, IMAGPose employs a serial approach and optimizes global and local image processing to ensure local fidelity and global consistency of the person images. 
**Q3: In ILC, the mask strategy is a critical component for unifying different conditional generations. But I am slightly confused about the purpose of this binary mask. What are its benefits/aim?** **Response:** We appreciate the reviewer pointing out this issue. In fact, we use binary masks to mark and distinguish the areas to be generated. Black pixels in the source image can easily mislead the model into identifying these areas as regions needing generation. To address this, we employ a binary mask that explicitly marks the all-zero areas to be generated, reducing model confusion and ensuring the correct regions are generated. **Q4: I noticed that if multiple poses are given at once and these poses are very continuous, would this be similar to generating a video based on continuous poses, such as a dance sequence? In such a user scenario, what are the comparisons and advantages of IMAGPose?** **Response:** Please refer to the **shared response** "Differences with technologies like Animate Anyone." We have added and discussed these differences. **Q5: Is there a trade-off between speed and efficiency? For example, if a user only wants to generate a target image based on a single pose, how should they proceed, and how should they make their choice?** **Response:** Thank you for your insightful question. IMAGPose achieves a good balance between speed and efficiency. We have added results of speed and SSIM, showing that IMAGPose runs nearly 8 times faster than PIDM and about 3 times faster than PoCoLD and CFLD, while demonstrating superior quantitative results regarding generation quality. Moreover, if users need to quickly generate target images based on a single pose, we recommend using the IMAGPose* setting, which replicates the target pose multiple times. If users prioritize higher generation quality, we suggest using the default IMAGPose setting, which replicates the source image multiple times. 
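The masking idea described in the Q3 response above can be illustrated with a minimal sketch (a hypothetical NumPy illustration, not the authors' actual implementation; the function name and array layout are assumptions): an extra binary channel tells the model which all-zero regions are meant to be generated, so they cannot be confused with genuinely black source pixels.

```python
import numpy as np

def build_ilc_input(frames, generate_flags):
    """Concatenate each frame with a binary mask channel.

    frames: list of (H, W, C) float arrays; frames to be generated are all-zero.
    generate_flags: list of bools, True where the frame must be generated.

    The extra channel (1 = generate, 0 = condition) lets the model tell
    regions to synthesize apart from genuinely black source pixels.
    """
    out = []
    for frame, gen in zip(frames, generate_flags):
        h, w, _ = frame.shape
        mask = np.full((h, w, 1), 1.0 if gen else 0.0)
        out.append(np.concatenate([frame, mask], axis=-1))
    return np.stack(out)
```

Without such a channel, an all-zero target slot is indistinguishable from a black region in a conditioning image, which is exactly the confusion the rebuttal describes.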
| Methods | Speed (s) | SSIM | |----------|----------|----------| |PIDM| 9.377| 0.7312| |PoCoLD| 4.762| 0.7310 | |CFLD|3.764| 0.7378 | |**Ours**| **1.236**|**0.7561**| **Q6-Q8: In Figure 8, what does IMAGPose\* refer to? What is the guidance scale for CFG? In Figure 9, does T1 (Default) refer to IMAGPose?** **Response:** We apologize for any confusion caused. In fact, once we have trained the IMAGPose model, we can use different testing configurations. For instance, when we have one source image and one target pose, we can replicate the source image three times, resulting in three identical source images and one target pose, the default IMAGPose setting. Alternatively, we can replicate the target pose three times, resulting in one source image and three identical target poses, which we refer to as IMAGPose*. In addition, the default CFG guidance scale is set to 2.0. In Figure 9, T1 (default) refers to the default setting of IMAGPose, which involves replicating the source image three times. --- Rebuttal 2: Comment: Thank you for providing such a detailed response. The additional quantitative results you shared effectively addressed my questions. The extensive explanations and analyses further clarified the points I found unclear during my initial review. I also reviewed the comments from the other reviewers and your corresponding replies. Overall, I am very satisfied with your response and would like to raise my score and vote for acceptance. --- Rebuttal Comment 2.1: Title: Thanks Reviewer QBtt for approving our work Comment: Dear Reviewer QBtt, Thank you for your response. We're glad to see that our reply was able to address your concerns. We appreciate your help in improving our paper! **If you have any further questions, please don't hesitate to reach out. We will remain actively available to assist until the end of the rebuttal period. We look forward to hearing from you!** Best, Authors
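The two test-time configurations explained in the Q6-Q8 response above (replicating the source image vs. replicating the target pose to fill the model's four input slots) can be sketched as a small helper. This is a hypothetical illustration; the function name, argument handling, and slot count default are assumptions based on the rebuttal's description, not the authors' code.

```python
def make_inference_inputs(source, pose, mode="default", slots=4):
    """Fill the model's fixed number of image/pose slots at test time.

    mode="default" (IMAGPose): replicate the source image to fill the
        remaining slots, keeping one target pose (recommended for quality).
    mode="star" (IMAGPose*): replicate the target pose to fill the
        remaining slots, keeping one source image (recommended for speed).
    """
    if mode == "default":
        return [source] * (slots - 1), [pose]
    if mode == "star":
        return [source], [pose] * (slots - 1)
    raise ValueError(f"unknown mode: {mode}")
```

With the default four slots, "default" yields three identical source images plus one target pose, while "star" yields one source image plus three identical target poses, matching the two settings the rebuttal describes.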
Summary: The paper introduces IMAGPose, a unified conditional framework designed to overcome the limitations of existing diffusion models in pose-guided person image generation. Traditional models primarily focus on generating a target image from a single source image and a target pose. IMAGPose extends this capability by addressing two additional scenarios: generating multiple target images with different poses simultaneously and generating target images from multi-view source images. The proposed framework incorporates three key modules: - Feature-Level Conditioning (FLC): Combines low-level texture features from a VAE encoder with high-level semantic features from an image encoder. - Image-Level Conditioning (ILC): Aligns images and poses through a masking strategy, supporting variable numbers of source images and poses. - Cross-View Attention (CVA): Ensures local fidelity and global consistency by decomposing global and local cross-attention. Extensive experiments demonstrate the framework's ability to produce consistent and photorealistic images under various challenging user scenarios. Strengths: - The paper introduces a unified framework that extends the capabilities of existing diffusion models, addressing important user scenarios that were previously overlooked. - The experimental results are robust, demonstrating the effectiveness of the proposed framework across multiple datasets. Weaknesses: - The framework's complexity, with multiple conditioning modules and attention mechanisms, may pose challenges for real-time applications and require significant computational resources. - While the framework shows promising results on specific datasets, it would be beneficial to see its performance on a broader range of datasets and in more diverse scenarios. - The use of frozen VAE and image encoders may limit the adaptability of the model to specific tasks, potentially impacting the quality of the generated images. 
- The paper could benefit from more detailed ablation studies to better understand the contribution of each module within the framework. Technical Quality: 3 Clarity: 3 Questions for Authors: - Can the authors provide more details on the computational requirements for training and inference using IMAGPose? Specifically, what are the resource constraints, and how do they impact the practical usability of the framework? - How does the framework perform on datasets outside of DeepFashion and Market-1501? Are there plans to test IMAGPose on more diverse datasets to evaluate its generalization capabilities? - Can the authors elaborate on the potential impact of using fixed encoders (VAE and image encoders) on the quality and flexibility of the generated images? Are there scenarios where fine-tuning these encoders could be beneficial? - The paper mentions that IMAGPose currently supports generating up to four images simultaneously. Does this indicate computational resource constraints? How might these constraints affect large-scale image generation tasks, particularly for high-resolution or complex scenes? - IMAGPose introduces three core modules (FLC, ILC, CVA), each adding to the model's complexity. How does this complexity impact the training and inference process, and does it increase implementation and debugging difficulty? - The paper uses frozen VAE and image encoders for feature extraction. While this reduces training time, are these encoders fully suitable for the specific task demands? Could this approach limit the quality and diversity of generated images? - Although IMAGPose aims to unify different user scenarios, are there specific or special requirements that still need additional adjustments or extensions? Could this general approach introduce limitations in some applications? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - The paper could discuss potential biases in the generated images and how the framework handles diverse demographic attributes. - A discussion on the limitations related to real-time applications and potential solutions to mitigate computational overhead would be beneficial. - While the societal impact of misuse is mentioned, a more detailed discussion on the ethical implications and safeguards for responsible usage could strengthen the paper. - The following related work is recommended for citation & discussion: Zhao, B., Wu, X., Cheng, Z.-Q., Liu, H., Jie, Z., & Feng, J. (2018). Multi-view image generation from a single-view. In Proceedings of the 26th ACM International Conference on Multimedia (pp. 383-391). Huang, S., Xiong, H., Cheng, Z.-Q., Wang, Q., Zhou, X., Wen, B., Huan, J., & Dou, D. (2020). Generating person images with appearance-aware pose stylizer. In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020). Liu, H., He, J.-Y., Cheng, Z.-Q., Xiang, W., Yang, Q., Chai, W., Wang, G., Bao, X., Luo, B., & Geng, Y. (2023). Posynda: Multi-hypothesis pose synthesis domain adaptation for robust 3D human pose estimation. In Proceedings of the 31st ACM International Conference on Multimedia (pp. 5542-5551). Tu, S., Dai, Q., Cheng, Z.-Q., Hu, H., Han, X., Wu, Z., & Jiang, Y.-G. (2024). MotionEditor: Editing video motion via content-aware diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7882-7891). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 1dhK: Thank you very much for your support and constructive suggestions. We are glad to see the positive assessment of our paper and appreciate the detailed feedback. **Q1&Q4&Q7: (1) Computational Requirements and Resource Constraints? (2) Impact on Practical Usability, such as high-resolution or complex scenes. (3) Special Requirements and Potential Limitations** **Response:** Thank you for your valuable feedback on our paper. (1) Based on your suggestion, we have provided detailed information regarding the computational requirements and memory overhead for training and inference. | Training Memory (G) | Training Time (H) | Testing Memory (G) | Inference Time (s) | |----------|----------|----------|----------| | 28.3 x 8 GPUs| 45.3 | 14.8 x 1 GPU | 1.236| Additionally, the resource constraints refer to our use of V100 GPUs. Users may encounter memory constraints if they wish to generate more than four images at once. However, we can address this by employing an autoregressive approach, allowing users to create more than four images sequentially. (2) For practical usability, IMAGPose is flexible in generating images for complex scenes. However, GPU memory limitations indeed affect high-resolution image generation since we use SD as the base model. This is a common issue for all SD-based models. Furthermore, recent advancements such as FouriScale and DiffuseHigh have effectively addressed this problem, allowing images generated by SD-based models to be easily upscaled to high resolutions. Additionally, IMAGPose can utilize other excellent base models, such as PixArt-α, to avoid memory and resolution constraints. However, we used SD as the base model for fair comparison purposes since most SOTA methods are based on SD. (3) This question is fascinating and essential. 
As you mentioned, there are specific scenarios (such as when generating images based on a single source image and a single target pose) where IMAGPose requires special settings. Specifically, IMAGPose needs to replicate the source image or the target pose three times to meet the model's input requirements. However, this hardly incurs any additional memory consumption or speed overhead. In conclusion, IMAGPose is not limited by different application scenarios. **Q2: Results on datasets outside of DeepFashion and Market-1501** **Response:** Please refer to the **shared response** regarding the "Results on Out-of-Domain Datasets." We have included additional visualization results on more diverse datasets. **Q3&Q6: Do frozen encoders and VAE limit the quality and diversity of generated images, and could fine-tuning these encoders be beneficial in certain scenarios?** **Response:** We genuinely appreciate this question and are keen to discuss it further. Using frozen encoders and VAE does have its trade-offs. While it significantly reduces training time and computational resources, it can potentially limit the quality and diversity of the generated images. Frozen encoders, trained on general datasets, might not capture task-specific nuances as effectively as fine-tuned encoders. Fine-tuning these encoders could indeed be beneficial in certain scenarios. For instance, fine-tuning could enhance the model's ability to capture specific texture details and improve overall image fidelity in applications requiring high precision and detail, such as fashion or face. This approach would allow the model to better adapt to the unique characteristics of the target domain, thereby improving the quality and diversity of the generated images. However, it is essential to note that fine-tuning comes with increased computational costs and requires more extensive datasets. 
In our case, we propose the FLC module, which combines features from both CLIP and VAE, to mitigate some of these limitations while maintaining efficiency. Therefore, IMAGPose adopts this balanced and acceptable design. **Q5: How does this complexity impact the training and inference process, and does it increase implementation and debugging difficulty?** **Response:** In fact, we employ a straightforward and effective two-stage training strategy. In the first stage, we train the diffusion model incorporating the FLC and ILC modules. In the second stage, we only train the CVA module. IMAGPose operates as an end-to-end model during inference, minimizing potential debugging difficulties. More importantly, upon acceptance of the paper, all training and testing code and checkpoints will be made available. **L1&L2&L3: (1) Discuss potential biases and how to handle them? (2) Real-time application limitations and how to mitigate computational overhead? (3) Safeguards for Responsible Usage** **Response:** Thank you for your thoughtful reminder. Based on your suggestion, we have added the following discussions. (1) The performance of the model is subpar for generating cartoon characters and non-photorealistic styles because our training data consists of photorealistic human images for fair comparison. In future work, we plan to include a broader range of data and design style transformation modules to overcome this bias. (2) To address the limitations of real-time applications, we propose several potential solutions: model optimization (such as model pruning, quantization, and distillation) and asynchronous processing (implementing asynchronous pipelines where image generation is precomputed or processed in parallel). (3) To mitigate ethical risks, we have added several safeguards for responsible usage, including transparency and disclosure, usage policies, collaboration with regulators, and detection tools. 
**L4: The following related work is recommended for citation & discussion:** **Response:** Thank you for recommending the relevant work. Following your suggestions, we have referenced and discussed the literature you listed. Due to space limitations, we provide the specific references and discussion in a follow-up comment during the discussion stage. --- Rebuttal Comment 1.1: Title: Add related work Comment: Dear Reviewer 1dhK: We have excerpted the following additional literature discussion: For example, VariGANs [1] combines variational inference and generative adversarial networks to generate multi-view images from a single image, achieving refinement from coarse to fine. APS [2] effectively generates human images by gradually coupling the target pose with the appearance of the constrained person. Given the lack of 2D-3D pose correspondences in the target domain training set, PoSynDA [3] simulates the 3D pose distribution in the target domain, effectively filling the gap in data diversity. MotionEditor [4] introduces a dual-branch structure that queries keys and values from the reconstruction branch in a decoupled manner, preserving the original background and the main character's appearance, thus supporting effective content editing. --- Rebuttal 2: Title: Seeking Further Feedback Comment: Dear Reviewer 1dhK: Again, we sincerely appreciate your detailed suggestions and encouragement, such as "addressing important user scenarios that were previously overlooked", "experimental results are robust", and "effectiveness of the proposed framework", which have greatly improved our work and inspired us to research further! Then, in our earlier response and revised manuscript, we have conducted additional experiments and provided detailed clarifications based on your questions and concerns. 
As the author-reviewer discussion stage is ending soon, we kindly ask you to review our revised paper and our response, and to consider adjusting your score if our response has addressed all your concerns. Otherwise, please let us know if there are any other questions. We would be more than happy to answer any further questions. Best, Authors --- Rebuttal Comment 2.1: Title: Let's engage in reviewer-author discussion Comment: Dear Reviewer 1dhK, We look forward to seeing your comments on the authors' rebuttal and any further clarifications you may need. Thanks --- Rebuttal 3: Comment: Dear Reviewer 1dhK: Thank you again for your detailed comments. In our previous response and revised manuscript, we have conducted additional experiments, expanded the discussion of related work, and provided detailed explanations to address your concerns and questions. As we are nearing the end of the author-reviewer discussion phase, we kindly ask you to review our responses. If our replies have addressed all your concerns, please consider adjusting your score. Otherwise, if there are any remaining issues, please let us know. **We would be more than happy to answer any further questions.** Best, Authors
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their helpful feedback and insightful comments. We are glad that the reviewers find our paper “ *highly insightful* ” (**QBtt**), “ *clear and well-motivated* ” (**QBtt**), and “ *simple yet ingenious* ” (**rV96**), “ *enlightening* ” (**rV96**) and “ *well-organized and clearly presented* ” (**rV96**), “ *addressing important user scenarios that were previously overlooked* ” (**1dhK**). Also, our experiments are considered “ *reasonable and necessary.* ” (**rV96**), “ *robust,* ” (**1dhK**), and “ *generate more details images* ” (**Cm6N**). From the view of pose-guided person generation, reviewer **1dhK** remarks that our work is “ *addressing important user scenarios that were previously overlooked* ” and “ *demonstrating the effectiveness of the proposed framework across multiple datasets.* ”. Reviewer **QBtt** mentioned that our work to “ *address different task scenarios is of significant value.* ” Then, reviewer **rV96** agrees that “ *it is very insightful and inspiring.* ”. Finally, Reviewer **QBtt** mentioned that “ *the proposed task scenarios are highly insightful, with a clear and well-motivated approach.* ” We further thank the reviewers for their constructive feedback. We have uploaded a PDF file which includes figures to address the reviewers’ feedback. **Next, we discuss two common comments raised by the reviewers.** **Q1: Results on Out-of-Domain Datasets.** (suggested by reviewer **1dhK** and reviewer **rV96**) **Response:** We greatly appreciate the reviewer's insightful comments. We have demonstrated the visualization results of IMAGPose on out-of-domain data. We randomly selected some poses and characters from out-of-domain sources, with detailed results provided in PDF format. The outcomes show that our IMAGPose continues to deliver excellent results, producing high-quality and high-fidelity images. 
**Q2: Differences with technologies like Animate Anyone.** (suggested by reviewer **QBtt**, reviewer **rV96**, and reviewer **Cm6N**) **Response:** We sincerely appreciate this question and look forward to discussing it further. Although Animate Anyone also injects features from an image encoder and VAE, our proposed IMAGPose differs in several key aspects: **(1) Implementation method.** Animate Anyone copies weights from the main UNet and maintains the same network structure, sharing latent spaces with the main UNet to facilitate feature interaction. In contrast, IMAGPose uses the same UNet to process both source and target images, resulting in a more harmonious and unified approach. **(2) Training parameter.** Animate Anyone introduces an additional Reference UNet, nearly doubling the parameter volume compared to IMAGPose, significantly increasing training complexity. **(3) Task objectives.** Animate Anyone supports only single reference image-guided image generation, while our IMAGPose unifies three common task types through designed FLC, ILC, and CVA, and supports image generation from multiple source images. For other questions raised by the reviewers, please see our response (with text and other new experiments) to individual questions below each review. We will incorporate all our responses and additional results in the final version of the manuscript. **Finally, we deeply appreciate the reviewer's detailed comments and thank them for helping us improve our work. We value the reviewer's insights and are very open to further discussions during the rebuttal period or afterward to explore this direction more. If the reviewer has any additional questions, please let us know. We are committed to being responsive until the end of this rebuttal period.** Pdf: /pdf/e1ee6c6c122e55cc607a205f278d02dcaf873f2f.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Separations in the Representational Capabilities of Transformers and Recurrent Architectures
Accept (poster)
Summary: This paper demonstrates theoretical separations in the representational abilities of Transformer and Recurrent Architectures on selected synthetic tasks, including index lookup, nearest neighbor, recognizing bounded Dyck languages, and string equality. The class of recurrent architectures examined includes popular networks such as RNNs, State Space Models, and Mamba. The results reveal significant differences in the required model sizes for each architecture to effectively represent these tasks. Additionally, the authors present some experiments that highlight the optimization efficiencies of architectures belonging to both classes on the selected tasks on practical-size sequences. Strengths: 1) This is a theoretically solid paper. The proof techniques are novel and well explained, and they might be useful for further analysis of modern deep learning architectures. 2) The architectures considered are of practical interest, and the analysis contributes to a deeper understanding of the power and limitations of these architectures. 3) The paper extends previously known separation results to a set of diverse and more real-world applicable tasks. Weaknesses: 1) The experimental part is fairly brief. The analysis of computational or statistical learning complexity is left to future work. Technical Quality: 3 Clarity: 4 Questions for Authors: - Do you have any guess of which Boolean functions are conjectured to be hard to compute for 2-layer Transformers of poly-logarithmic size? - It seems your separation results are worst-case over the input space. Can you argue whether all/some separations still hold for typical inputs, if for instance a uniform distribution over the input space is assumed? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations and societal impact are well addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and time. Responses to the individual comments below. > “The experimental part is fairly brief. The analysis of computational or statistical learning complexity is left to future work.” We respectfully disagree with this assessment. The central claims of the paper are theoretical in nature and we believe the empirical results are sufficiently extensive to support them and provide some context for the theoretical results. Our work does include experiments for all primary tasks considered in the paper – apart from those related to associative recall, where extensive empirical evidence has already been provided in earlier works (see citations in the paper). Additionally, we consider five different recurrent architectures spanning traditional architectures such as LSTMs to more recently proposed ones such as Mamba, DSS, etc. Given the kind of statements made in the theoretical results, it seems natural to explore with experiments whether differences of a similar nature are exhibited in practice. We believe we have studied that fairly extensively. While it is true that experiments analyzing the statistical learning complexity of models are not present, we believe that such analysis is somewhat removed from the central claims of the paper. > “Do you have any guess of which Boolean functions are conjectured to be hard to compute for 2-layer Transformers of poly-logarithmic size?” We do not have any strong conjectures at the moment, but there are a couple of directions worth thinking about. - In our paper, we show that 2-layer Transformers with poly-log size can represent k-DNFs/k-CNFs with at most N terms. We suspect that k-DNFs/k-CNFs with cubic ($\Omega(N^3)$) or more terms may be difficult for two-layer Transformers to represent with poly-log size. 
- Due to circuit complexity bounds, functions outside of the complexity class TC0 are impossible for Transformers even of polynomial size, and thus in particular for 2-layer Transformers of poly-logarithmic size. This is likely to include foundational problems such as the evaluation of Boolean formulas. However, it is unclear if such tasks can be represented by small-sized RNNs/SSMs. > “It seems your separation results are worst-case over the input space. Can you argue whether all/some separations still hold for typical inputs, if for instance a uniform distribution over the input space is assumed?” Thanks for raising this question. We have some thoughts on this. Currently, most of the results in our paper are of a deterministic nature in the sense that a model is said to express a function $f$ if it can produce the same output as $f$ for all inputs in the domain of $f$. This is more typical when analyzing expressivity in the context of classification, i.e., $f: X \rightarrow \{0, 1\}$. Upon some thinking over the past week, we believe that some of our lower bounds can indeed be extended to the average case on certain distributions, such as the uniform one. For instance, there is a well-studied Boolean function called Inner-product mod 2 (IP2 in short), for which we can show that any RNN or 1-layer Transformer must have linear size to achieve low error with respect to the uniform distribution. The proof follows from the arguments in Section C.2 in Vardi et al. [1]. We think that some of the lower bounds in our paper, such as Index Lookup and nearest neighbor/MQAR, can be extended to various distributions based on reductions from the IP2 problem. We will verify the proofs and include them in the next version of our paper. [1] Vardi et al. Size and Depth Separation in Approximating Benign Functions with Neural Networks. COLT 2021. 
--- Rebuttal Comment 1.1: Title: Thank you Comment: I appreciate the authors' thoughtful response, and I will be raising my score accordingly.
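The inner-product mod 2 (IP2) function discussed in the rebuttal above is simple to state concretely. A minimal Python sketch of the Boolean function itself (our illustration only; it says nothing about the model constructions in the paper):

```python
def ip2(a, b):
    """Inner product mod 2 of two equal-length bit strings:
    IP2(a, b) = sum_i a_i * b_i (mod 2)."""
    assert len(a) == len(b)
    return sum(x & y for x, y in zip(a, b)) % 2

# (1*1 + 0*1 + 1*1) mod 2 = 0
assert ip2([1, 0, 1], [1, 1, 1]) == 0
```

Despite this simplicity, the rebuttal's claim is that computing IP2 with low error under the uniform distribution already forces linear size for RNNs and 1-layer Transformers.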
Summary: This work studies the differences between Transformers and recurrent models with respect to 4 tasks: index lookup, associative recall, string equality and bounded Dyck languages. The authors prove that for index lookup and nearest neighbor recall, there exists a 1-layer Transformer that needs poly-logarithmic width in the number of inputs while recurrent architectures require at least linear width. On the other hand, 1-layer Transformers require at least linear width for checking string equality and bounded Dyck languages. The authors strengthen their claims with experiments on synthetic datasets. Strengths: I found the following techniques interesting: (1) the usage of the Johnson-Lindenstrauss lemma to construct solutions with poly-logarithmic width in the number of inputs. (2) Using results from communication complexity to prove the lower bounds on model width. While the ideas relating to communication complexity have been used in Sanford et al. (which is acknowledged by the authors), I nevertheless found the presentation of the lower-bound results to be interesting. I also found the writing to be refreshing; the authors are careful when interpreting the results and do not overstate the importance of their results while being precise in their claims. Weaknesses: Overall, I worry that the contributions of the work don't make progress towards the motivating questions in the introduction: why Transformers have supplanted LSTMs (L29) in many problems. The analysis is specific to synthetic tasks with 1-layer recurrent models and Transformers (https://arxiv.org/abs/2105.11115, https://arxiv.org/abs/2208.01066). The theorems comment on the width of 1-layer Transformers but they do not seem to have any relevance to Transformers with multiple layers or when trained on other tasks. With the goal of differentiating Transformers from LSTMs, I wonder why these particular tasks were chosen and what makes them important. 
It would help to motivate why these tasks are important and, for example, explain why Transformers are better for language modelling. The different tasks are often motivated for specific contexts such as in-context learning (associative recall) or natural language itself (Dyck languages - https://arxiv.org/abs/2105.11115). Furthermore, the results also suggest that scaling up both Transformers and recurrent networks suffices for tackling any of the tasks. Another concern is that the results are primarily for 1/2-layer Transformers. However, it is more relevant to study Transformers with multiple layers and it is unclear how to extend these results or if they have any implications for deeper Transformers. For example, we know that Dyck languages can be represented by deeper Transformers. Technical Quality: 4 Clarity: 3 Questions for Authors: To reiterate the points raised above 1. How do we extend these results to deeper networks? 2. Are the results relevant for other tasks or other domains, or is the relevance of these results very narrow? 3. Is there a way to motivate the selection of the tasks considered in this work? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: The authors sufficiently acknowledge limitations. In particular they discuss that existence proofs do not guarantee such a solution is always found using gradient descent. The authors also comment on the limitations of their results to 1-layer Transformers and how they do not extend to 2-layer Transformers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and time. > “The analysis is specific to synthetic tasks with 1-layer recurrent models and Transformers …” > “How do we extend these results to deeper networks?” We wish to clarify what may be a misunderstanding here – we should have been more explicit about this when we defined recurrent models on pg. 3. While our lower bounds for Transformers apply only to 1-layer models, our results for recurrent models apply to any depth. The way we have defined RNNs (Section 2), the transition function is allowed to be an arbitrary function which is very general and includes multi-layer versions of all architectures used in practice. When we state that an RNN needs a hidden state of size $\Omega(N)$ to solve a task, what we mean is that for any RNN with L layers, the sum of the size of hidden states of all layers has to be $\Omega(N)$ and thus the size of the model is $\Omega(N)$. Hence, our results show some tasks that can be solved with 1 or 2-layer Transformers (e.g. Index Lookup, Equality, Nearest Neighbour, etc) with poly-log size whereas any **multi-layer RNN** must have at least **linear size** (exponentially larger) to solve them. We will emphasize this point more clearly in Section 2 to avoid this confusion. We agree with the reviewer that understanding the limitations of multi-layer Transformers (TF) is very important. However, proving lower bounds even for 1-layer TF has been challenging—there have been some very recent results [4] and most questions about (multi-layer) TF with sub-linear width remain open. Our lower bound results for 1-layer TF are more general and more interestingly apply to tasks solvable by small-sized RNNs or small-sized 2-layer TF, thus showing a clear separation. Existing lower bounds for multi-layer TF are not solely information-theoretic and rely on unproven conjectures, e.g. from circuit complexity. 
The situation mirrors somewhat that in computational complexity, where proving circuit lower bounds even for depth 2 or 3 has proved challenging for decades (https://shorturl.at/6lwUh). Even for 1-layer Transformers (TF) we believe there are many interesting unanswered questions, some of which are answered in our work. (Q1) Are there tasks that cannot be solved by small 1-layer TFs but are solvable by much smaller 2-layer TF? (Q2) Since 1-layer TFs can solve tasks like Index lookup with poly-log size, while any multi-layer RNN needs linear size, are there tasks that small RNNs can solve but not 1-layer TFs? (1) We show an exponential depth separation for tasks like Equality, Dyck, and Disjointness, where 1-layer TFs need linear width but 2-layer TFs solve them with log width. Such depth separation questions are widely studied for feedforward networks (see [1, 2, 3]). As far as we know, our results are the first to show exponential depth separation between 1-layer and 2-layer TFs on multiple tasks. (2) Since 1-layer TFs of poly-log size can express tasks that multi-layer RNNs with sublinear size cannot, the lower bound on Dyck shows a task solvable by constant-sized RNNs but not by 1-layer TFs. Regarding upper bounds or constructions for 1 or 2-layer TF, we see it as a strength rather than a limitation. We show multiple tasks that 1 or 2-layer TF can represent, whereas any multi-layer RNNs must be much larger regardless of depth. [1] The Power of Depth for Feedforward Neural Networks. [2] Depth Separation for Neural Networks. [3] Representation benefits of deep feedforward networks. [4] Representational strengths and limitations of transformers. > “With the goal of differentiating .. why these particular tasks were chosen and what makes them important … for language modelling.” > “Is there a way to motivate the selection of the tasks in this work?” We discussed the task choices in the Introduction and further for Dycks in Section F. 
Briefly, our goal was to focus on tasks that appear natural or have strong justification in ML or linguistics research. For instance, in the MQAR task introduced in [1], they observed a perplexity gap in natural language modeling between Transformers (TF) and non-TF LMs, finding that non-TF language models struggled with texts that precisely mimic the MQAR task. Similarly, for the general version, i.e., the nearest neighbor task, multiple works [2, 3] found TF-based LLMs can mimic such behavior in practice. Dyck languages have been extensively studied in formal language theory and ML research. Tasks like String Equality and Index Lookup are natural tasks that are likely to be relevant as primitives in search and learning tasks. For example, in code execution, given `arr = [6, 2, 8, 2, …, 3]` followed by `print(arr[i])`, the task is essentially Index Lookup. Models might also perform such lookups when provided with line numbers for debugging. [1] arxiv.org/abs/2312.04927 [2] arxiv.org/abs/2310.03016 [3] arxiv.org/abs/2404.11018 > “The analysis is specific to synthetic tasks…” > “Are the results relevant for other task or other domains and is the relevance of these results very narrow?” In the main paper, we discuss upper bounds for Transformers for specific tasks relevant to practice. In Section F, we show Transformers can represent a broader class of Boolean functions which include subclasses like k-CNFs/k-DNFs. Our lower bounds for multi-layer RNNs and 1-layer Transformers apply to any function with communication complexity ~ N, including most Boolean functions. Since the work aims to understand the problems theoretically, the problems must be mathematically well-defined and hence synthetic by definition. 
We're unsure what is meant by "other domains," but as these models are applied to more domains beyond NLP, like formal mathematics or algorithm design, understanding their capabilities/limitations along the lines we have considered in our paper would be imperative. We hope the techniques/framework presented in our work will be useful for such analyses. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thank you for the detailed response, particularly on the challenges of extending these results to multi-layer Transformers and for clarifying that the setup includes multi-layer RNNs. Most of my concerns were addressed and I am hence raising my score. --- Rebuttal Comment 1.2: Title: Potential useful reference Comment: I only followed the above discussion cursorily and there is a recent work that tries to provde lower bounds on depth for the transformer as a function of the Markovian order of the input process. Sharing it here in case it's of interest/relevance to the authors/reviewers: https://arxiv.org/abs/2407.17686v1 --- Reply to Comment 1.2.1: Comment: Thanks for the pointer! Upon a quick look, it seems they also show communication complexity-based lower bounds for 1-layer Transformers and their lower bounds for multi-layer TF are based on some assumptions on the attention patterns. We will take a closer look and consider citing it appropriately in the next version.
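To make the Index Lookup task from the exchange above concrete, here is a hypothetical Python sketch of how one instance could be generated (the function name and conventions are our own illustration, not the authors' experimental code):

```python
import random

def index_lookup_instance(n, vocab_size=26, seed=None):
    """One instance of Index Lookup: a sequence s_1, ..., s_N of symbols
    followed by a position p; the target output is s_p (1-indexed)."""
    rng = random.Random(seed)
    symbols = [rng.randrange(vocab_size) for _ in range(n)]
    p = rng.randrange(1, n + 1)
    return symbols + [p], symbols[p - 1]

seq, target = index_lookup_instance(10, seed=0)
p = seq[-1]
assert seq[p - 1] == target  # the answer is the symbol at position p
```

This mirrors the `arr = [...]` / `print(arr[i])` analogy in the rebuttal: the model sees the whole sequence and must emit the symbol at the queried position.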
Summary: This paper analyzes the differences in terms of representations between Transformers and recurrent architectures. They highlight multiple cases: a) a setting where a 1-layer Transformer can represent the task with a log number of parameters but RNNs cannot (index lookup) b) a case where RNNs can represent the task with a log number of parameters but a 1-layer Transformer cannot (Dyck language) c) a case where both 1-layer Transformers and RNNs cannot represent the task (Boolean tasks like string equality). They then show that a 2-layer Transformer can represent the string equality task and other associative recall tasks such as nearest neighbors. They end with some experiments to validate their theorems: they show that 1-layer Transformers are great at learning the index lookup task and that RNNs learn the bounded Dyck languages very quickly. Strengths: I think that the paper is of great value to the community. There have been many papers in the community that study the cases where Transformers are superior to RNNs [1] and where RNNs are superior to Transformers [2]. It is great that this paper gives a unified picture of the strengths and weaknesses of both models. Besides, their analysis is clean: the theorems and proof sketches are very easy to follow. For this reason, I advocate for acceptance of this paper. [1] Jelassi, S., Brandfonbrener, D., Kakade, S. M., & Malach, E. (2024). Repeat after me: Transformers are better than state space models at copying. arXiv preprint arXiv:2402.01032. [2] Liu, B., Ash, J., Goel, S., Krishnamurthy, A., & Zhang, C. (2024). Exposing attention glitches with flip-flop language modeling. Advances in Neural Information Processing Systems, 36. Weaknesses: There are not a lot of flaws in this paper in my opinion. I will just give some suggestions: - I believe that the authors could further improve the presentation of their results. 
In particular, it is not clear at the beginning why they choose the tasks they propose and it sounds a bit like a "catalog". I think the authors should clearly say that they consider a case where a 1-layer Transformer with log(L) params fails but an RNN does not, another case where an RNN with log(L) params fails but a 1-layer Transformer does not, etc. Maybe it may be worth adding a table where the column headers are the tasks and the row headers are the two models and each entry contains the complexity of each model at each task. - Besides, I liked a lot the discussion from lines 210 to 218 and I believe that this should be earlier in the paper (even in the introduction). I think this point is central to understanding one difference between Transformers and RNNs. I think that the point raised by the authors is not totally novel since a similar behavior has been reported by [1] in the case of the copy task. - Regarding the superiority of RNNs over Transformers in the case of the bounded Dyck languages, do the authors have a similar discussion to add? If yes, could they add it? Is the explanation similar to the one advanced by [2]? - Lastly, I don't know if it is possible to give any intuition about the limitation of 1-layer Transformers and RNNs at representing Boolean functions? - I am not sure I follow the experiments for the bounded Dyck languages. The authors say that the 1-layer Transformer achieves near-perfect accuracy up to length 100 but the red curve in figure 2 right is around 60% for all the iterations. Did I miss something? [1] Jelassi, S., Brandfonbrener, D., Kakade, S. M., & Malach, E. (2024). Repeat after me: Transformers are better than state space models at copying. arXiv preprint arXiv:2402.01032. [2] Liu, B., Ash, J., Goel, S., Krishnamurthy, A., & Zhang, C. (2024). Exposing attention glitches with flip-flop language modeling. Advances in Neural Information Processing Systems, 36. 
Technical Quality: 4 Clarity: 3 Questions for Authors: I mentioned my questions in the weaknesses section. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The authors mention the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and time. Responses to the individual comments below. > “I believe that the authors could further improve the presentation of their results. In particular, it is not clear at the beginning why they choose the tasks they propose and it sounds a bit like a "catalog". I think the authors should clearly say that they consider a case where a 1-layer transformer with log(L) params fail but not RNN, another case where RNN with log(L) params fail but not a 1-layer etc. Maybe it may be worth adding a table where the column headers are the tasks and the row headers are the two models and each entry contains the complexity of each model at each task.” Thank you for the suggestion. We agree that categorizing with respect to the separations (log L vs L) could be more helpful. We considered adding a table/figure at the top of the second page. We weren’t sure about the best way to depict the results and due to lack of space, we decided to go without it. We will consider adding these in the next version. > “Besides, I liked a lot the discussion from lines 210 to 218 and I believe that this should be earlier in the paper (even in the introduction). I think this point is central to understand one difference between Transformers and RNNs. I think that the point raised by the authors is not totally novel since a similar behavior has been reported by [1] in the case of the copy task.” Thanks for the suggestion and we will consider adding it to the introduction in the next version. Regarding your point about [1], we do not claim novelty about the intuition. We believe other researchers in the community may have a similar intuition behind the differences in the two architectures. In our work, that intuition is formalized using communication complexity for lower bounds and the JL vectors for upper bounds. In their work [1], they use a different approach to derive a lower bound for the copying task. 
However, it is worth noting that using communication complexity-based techniques as used in our work, it is also possible to show lower bounds for RNNs to perform the copying task. We will consider adding that somewhere in the appendix for readers who might find it interesting. > “Regarding the superiority of RNNs over Transformers in the case of the bounded Dyck languages, do the authors have a similar discussion to add? If yes, could they add it? Is the explanation similar to the one advanced by [2]?” We have some discussion on this at the beginning of Section F, which you might find helpful. However, the discussion is intertwined with the proof technique, so it might be less straightforward than the former discussion. The general intuition is that bounded Dycks can be represented by constant-sized DFAs, implying that RNNs require very little (constant) memory while processing them sequentially. In contrast, a 1-layer Transformer must compute the same based on a convex combination of all input symbols (attention) and classify 0 or 1 based on this vector (using MLP). The weights of the convex combination are determined by dot products of query and key projections, and hence cannot be arbitrary. In simpler terms, the proof intuitively shows that if a 1-layer Transformer can recognize bounded Dyck languages, then the query and value vectors must contain sufficient information, $\Omega(N)$ in bits. Think of it this way: the width of the query/key vectors determines the flexibility of the attention weights, and the width of the value vectors or input embeddings determines the amount of information stored in the output of the attention block. The proof essentially shows that both must be sufficiently wide (contain enough bits of information) for any arbitrary MLP block to be able to classify correctly based on the attention block's output. 
> “Lastly, I don't know if it is possible to give any intuition about the limitation of 1-layer Transformers and RNNs at representing Boolean functions?” For RNNs, even for Boolean functions, the key intuition is the same as the one explained for the Index Lookup task. The goal of introducing that task was to highlight that intuition in a natural manner since it may be less obvious in the context of Boolean functions. For 1-layer Transformers, the intuition is the same as the one described above. > “I am not sure I follow the experiments for the bounded Dyck languages. The authors say that the 1-layer Transformer achieves near-perfect accuracy up to length 100 but the red curve in figure 2 right is around 60% for all the iterations. Did I miss something?” For bounded Dyck languages, we find that 1-layer Transformers struggle at lengths 200 and above, unlike RNNs and 2-layer Transformers. Figure 2 right depicts the validation curve for models at length 400 (and not 100). The goal of that line about length 100 is to clarify that this difference in performance occurs at lengths above 100 or so, which is natural since the difficulty of the task increases with length. For the Index Lookup task, all models solve the task at length 20 and the difference is more stark at length 100 and above (see Figure 2 left). --- Rebuttal Comment 1.1: Comment: I thank the authors for their reply. They properly answered all my questions. I maintain my score.
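The constant-memory intuition for bounded Dyck languages in the rebuttal above can be illustrated directly: Dyck-1 with nesting depth at most K is recognized by a counter that never exceeds K, i.e., a constant-sized DFA. A small Python sketch (our own illustration, not the paper's construction):

```python
def bounded_dyck1(s, depth_bound):
    """Recognize Dyck-1 strings over '(' and ')' whose nesting depth
    never exceeds depth_bound. The only state is a bounded counter,
    so a constant-sized DFA (or a tiny RNN) suffices."""
    depth = 0
    for ch in s:
        depth += 1 if ch == '(' else -1
        if depth < 0 or depth > depth_bound:
            return False
    return depth == 0

assert bounded_dyck1("(()())", 2)
assert not bounded_dyck1("((()))", 2)  # depth 3 exceeds the bound
assert not bounded_dyck1("())", 3)     # unmatched ')'
```

The contrast with a 1-layer Transformer is that the latter cannot maintain such a running counter; it must recover membership from one convex combination of all input symbols, which is where the $\Omega(N)$ width requirement in the rebuttal comes from.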
Summary: In this paper, the authors study representational separation results for two widely used classes of language models: Transformers and RNNs. For a set of practically well-motivated tasks, they establish lower and upper bounds for attention-based one-layer (and some two-layer) Transformers and arbitrary RNNs, using and building upon techniques from communication complexity. They also empirically validate their conclusions. Strengths: First of all, I want to commend the authors for clearly elucidating the main ideas and the intuitions in a very accessible manner, even for non-experts. Such a style of writing is a rarity these days and the authors deserve credit for this. Overall, it's a very important topic and problem of research interest to understand the differences between the two widely used models in the form of Transformers and RNNs. They present a set of interesting results for well-motivated tasks, and establish theoretical results for Transformers needing to be of linear size in the input dimension, whereas RNNs could get away with logarithmic size, and vice versa. Weaknesses: While I appreciate the technical results of the paper, I am not fully sure how meaningful these results are in order to decipher the fundamental differences between RNNs and Transformers. In particular, for the index lookup task, it makes sense that the only way RNNs can retrieve the symbol $s_p$ at an unknown location $p$ revealed at the end is via storing all the past information in the hidden state, and hence $m \geq N/p$ is logical. However, if you feed the sequence in the order $p, s_1, \ldots, s_N$, would the same result hold? I believe you could get away with $\log N$ here too, though I could be wrong. Shedding light on things like this could yield more insights about how these architectures are fundamentally different. 
Because for an index lookup task, if all we care about is retrieving a symbol at some position, we don't really care about which order the input sequence is fed in, right? Also, on a minor note, as the authors themselves acknowledged, these results might not have full bearing on learnability settings. It would be interesting to see how these results hold for SGD learnability on the same tasks. Food for future thought. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. I realized that the soft-max attention in line 121 is non-causal. Would your results change if it's causal attention? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and time. Responses to the two points raised in your review below. > “In particular, for the index lookup task, it makes sense that the only way RNNs can retrieve the symbol $s_p$ at an unknown location revealed at the end is only via storing all the past information in its hidden state and hence $m\geq N/p$ is logical. However, if you feed the sequence in the order $p, s_1,\ldots, s_N$, would the same result hold? I believe you could get away with $\log N$ here too though I could be wrong. Shedding light on things like this could yield more insights about how these architectures are fundamentally different? Because for an index lookup task, if all we care about is retrieving a symbol at some position, we don't really care about which order you feed the input sequence right?” While what you have mentioned is technically correct, that approach is specific to the particular version of the Index Lookup task and it does not overcome the fundamental issues with how RNNs process inputs. To be clear, for the case where the models are provided sequences in the order $p, s_1,\ldots, s_N$, RNNs can indeed get away with a hidden state of size $\log N$. However, consider the following variant of the Index Lookup task (let’s call it Multi-Index Lookup) where the models are provided with $k =O(N)$ (e.g. N/2, N/4, etc) indices for which they have to look up and produce the respective symbols. In particular, the input is $s_1, \ldots, s_N, p_1, p_2, \ldots, p_{k}$. For each $p_i$ after the symbols $s_1, \ldots, s_N$, the model is required to output the respective symbol at position $p_i$. Note that for one-layer Transformers, the construction for the Index Lookup task can be directly extended for this Multi-Index Lookup task implying that poly-log size Transformers can solve this. 
However, for RNNs, they must have a hidden state of size $\Omega(N)$ to solve this correctly even if the positions are prepended to the symbols. Instead of a reduction from the $\mathrm{Index}$ problem, we can show this via a simple reduction from the Disjointness problem. In other words, if the input sequence is of the form $p_1, p_2, \ldots, p_{k}, s_1, \ldots, s_N, p_1, p_2, \ldots, p_{k}$ and an RNN can produce the required outputs, then two parties Alice and Bob can compute Disjointness using $mp$ bits, which implies that $mp$ must be $\Omega(N)$. The argument is quite straightforward. Let's say Alice and Bob have two bit strings $a$ and $b$ of size $k=O(N)$ and both have access to the same RNN. For simplicity, they have agreed that Alice will place her bits in the first $k$ indices sequentially. Alice can then take an RNN, and provide it with indices $1$ to $k$ as input followed by $k$ symbols corresponding to $a$. The last $N - k$ symbols can be arbitrary. Alice can then send the hidden state to Bob, which requires $mp$ bits. Bob can provide the same set of indices $1$ to $k$ to the RNN and then record the output. Bob can then compute Disjointness and hence it follows that $mp = \Omega(N)$. **Summary:** While it is true that for the Index Lookup task prepending the input sequence with the position/indices can make a difference for RNNs, the limitation/lower bound still applies to a natural extension of the task involving multiple indices. For us, the goal of introducing Index Lookup was to come up with the simplest task to describe our tools. However, based on the point raised by you, we see that it could create confusion, and will make sure to clarify this in the next version of the paper. > “I realized that the soft-max attention in line 121 is non-causal. 
Would your results change if it's causal attention?” Using causal attention does not affect any of the constructions or lower bounds for Transformers in the main paper, i.e., Index Lookup, Equality, Disjointness, Nearest Neighbor, etc. It only affects the construction of Transformers presented in Section F.3 for a more abstract yet general class of Boolean functions. --- Rebuttal Comment 1.1: Title: Acknowledgement of the rebuttal Comment: I am satisfied with the authors' response, which addressed my concern. Happy to raise my score to 7.
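The Multi-Index Lookup variant described in the first response above can be made concrete with a small sketch. The generator below is illustrative only; the function and variable names are ours, not from the paper:

```python
import random

def make_multi_index_lookup(N, k, vocab_size=26, seed=0):
    """One Multi-Index Lookup instance: the input is s_1..s_N followed by
    k query positions p_1..p_k; the target for each p_i is s_{p_i}."""
    rng = random.Random(seed)
    symbols = [rng.randrange(vocab_size) for _ in range(N)]
    queries = [rng.randrange(1, N + 1) for _ in range(k)]  # 1-indexed positions
    inputs = symbols + queries
    targets = [symbols[p - 1] for p in queries]
    return inputs, targets

inputs, targets = make_multi_index_lookup(N=8, k=4)
```

Because the $k = O(N)$ queries arrive only after all $N$ symbols, a recurrent model must keep essentially all of the symbols in its hidden state before the first query is seen, which is the intuition behind the $mp = \Omega(N)$ bound.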
Rebuttal 1: Rebuttal: We thank all the reviewers for their thoughtful feedback and their time. We are encouraged to see that they found our results interesting (Rev *LT9S, FtyK, BJPt*), well-motivated (Rev *LT9S, BJPt*), and to be of value to the community (Rev *FtyK*). We are further pleased to see that they found our proof techniques to be interesting (Rev *LT9S, BJPt, hDB9*), clean (Rev *FtyK*), and well-explained (*all reviewers*). In this work, we show that various tasks can be solved by small-sized (poly-log size with 1 to 2 layers) Transformers, whereas any multi-layer RNN must be exponentially larger in comparison. Additionally, we show that 1-layer Transformers cannot solve certain tasks with sublinear width, whereas they can be solved by either RNNs or 2-layer Transformers of much smaller size. We have addressed the weaknesses and specific questions from each reviewer in the individual responses. Below, we summarize the key aspects of our responses. Please refer to the individual responses for more details. ------------------------------------------------ Reviewer LT9S mentioned that the Index Lookup task for which we provide a linear lower bound for RNNs can be substantially easier if the inputs are provided in a different way. In the rebuttal, we explain why it does not solve the core issue and show that a natural extension of the task will still be hard for RNNs even if the inputs are provided in a different manner. Reviewer FtyK provided some suggestions and had some questions, which we have answered in the individual response. Reviewer hDB9 mentioned that our analysis is specific to 1-layer recurrent models and Transformers. We believe there may be a misunderstanding. We clarify that our lower bounds for recurrent networks apply to models of any depth, not just 1-layer RNNs. Further, we discuss the implications and relevance of the lower bounds for 1-layer Transformers and other questions raised by the reviewer in the individual response. 
Reviewer BJPt stated that the experimental part of our paper is brief and does not include experiments to analyze statistical learning complexity. We argue that the experiments are supportive in nature and are sufficiently extensive to provide context for the theoretical results in the paper. We address the specific questions raised by the reviewer in the individual response.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Open-Book Neural Algorithmic Reasoning
Accept (poster)
Summary: This paper proposed an open-book learning framework that allows networks to utilize the entire training dataset during reasoning, significantly enhancing performance on the CLRS Algorithmic Reasoning Benchmark and revealing intrinsic connections between different tasks through an attention-based mechanism. Strengths: Innovation: The use of training datasets to enhance algorithmic reasoning tasks is novel. Workload: Your research workload is significant. The article provides sufficient experimental support. Writing quality: Your paper is well-written, with clear and precise language and a smooth flow of ideas. The structure is reasonable, and the logic is sound, making it very enjoyable to read. Experimental analysis: Your experimental analysis is rigorous, as your experimental design is reasonable and analysis methods are scientifically reliable. Weaknesses: 1. Introducing the additional memory seems to have a large storage overhead if the training set is large. 2. The author's proposal is similar to retrieval augmented generation (RAG) in NLP, so I hope it can be discussed. Technical Quality: 4 Clarity: 3 Questions for Authors: What are the implications if the testing phase is supported by unseen datasets? Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The limitations of the study and the possible negative social impact have been well documented by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments. The following addresses the questions and concerns one by one. **W1:** Introducing the additional memory seems to have a large storage overhead if the training set is large. **A:** The proposed open-book framework is flexible enough to adapt to different computational environments. As the number of auxiliary data points decreases, the memory required by the framework also decreases; when the number of auxiliary data points is 0, the memory requirement of the framework is the same as that of the original methods. In our experimental setting, the memory required increased by an average of 30% compared to the original methods. **W2:** The author's proposal is similar to retrieval augmented generation (RAG) in NLP, so I hope it can be discussed. **A:** Thanks for pointing this out. The proposed framework shares some similarities with the technique of retrieval-augmented generation in LLMs. Both approaches involve augmenting the final step of model generation with additional information to aid the output. Although the high-level concept is similar, our open-book framework targets smaller models and opens the black box of these models, resulting in some technical differences compared to LLMs. We'll add a discussion on this point in the final version. **Q:** What are the implications if the testing phase is supported by unseen datasets? **A:** Due to the design of the gate mechanism (see line 19 of Algorithm 1), our method achieves robustness. When supported by unseen data in the testing phase, the framework can adjust the influence of auxiliary data points through the gate function, resulting in robust performance.
Summary: This paper presents open book Neural Algorithmic Reasoning (NAR). The central claim the authors investigate is whether open book reasoning -- allowing a model to query information from its training set relevant to the current query -- can be unified with existing NAR architectures. In doing so, the authors present a general framework for open book NAR. The authors find that their framework is more performant on the CLRS benchmark. Furthermore, the authors investigate open-book NAR with multi-task training and find that it achieves similar performance to the current best multi-task NAR algorithm -- exceeding the baseline in some tasks. Strengths: - Significance: Neural algorithmic reasoning models have been highly effective in real world use cases `[1]` and are extremely relevant to the NeurIPS community. This paper introduces an orthogonal direction of improvement over current work in the field. As such, it seems that this paradigm is applicable to any NAR model that follows the Encode - Process - Decode paradigm. - Clarity: The manuscript is well-structured and easy to read. `[1]`: https://arxiv.org/abs/2108.11482 Weaknesses: __Robustness and Generalization__: While I understand the logic of why open-book reasoning is relevant to NAR, wouldn't the features learned at train time overfit to the training set and hurt out-of-domain performance? I'm concerned that, as the distribution shift increases, the efficacy of an open-book NAR model will decrease considerably faster than that of a vanilla NAR model. Concretely, I recommend the authors compare with the algorithm and dataset presented in `[3]` (one of the papers cited in the introduction). This experiment has the added advantage of lending credence to the generalization claim in L54-57 because, if open-book learning is more performant, it must be extracting “background knowledge” features invariant to distribution shift. Presently, I'm recommending a __Borderline Acceptance__. 
Open book NAR seems to be highly desirable in real world scenarios, but I'm concerned that the performance gains come at the cost of OOD performance, which seems integral to the practical benefits of NAR algorithms. I'm willing to change my recommendation based on future discussion with the authors. Minor comments: - L36: CLRS is definitely a great litmus test for NAR algorithms. I recommend the authors look into SALSA-CLRS `[2]` as well, where open-book performance might admit better scalability than current baselines. - L223: This section will benefit from an explanation of what metrics were used for each use case and why. `[2]`: https://arxiv.org/pdf/2309.12253 `[3]`: https://arxiv.org/pdf/2302.10258 ----- Increasing score to __Weak Accept__ after discussions with the authors. Technical Quality: 3 Clarity: 3 Questions for Authors: (addressed in weaknesses section) Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed limitations, though I recommend an explicit limitations section in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments. The following addresses the questions and concerns one by one. **W1:** Concerns about the OOD performance. **A:** The results presented in the paper are exactly the out-of-distribution performances of the proposed framework. As pointed out in Line 42 of the paper, the test instances are substantially larger in scale compared to those in the training set. For each dataset in the CLRS benchmark, the training set graphs contain approximately 12 nodes on average, while the test set graphs contain 64 nodes and are specifically generated to evaluate the model's OOD performance (as claimed by the CLRS creators). To further validate our framework's OOD performance, we include two additional experiments in the PDF attached to the global response. The overall design principle is consistent with that of the SALSA-CLRS dataset. In Figure 2, we fix the training set and increase the test set's graph size; in Figure 3, we fix the test set and vary the training set's graph size. We observe that even when the ratio of test to training graph sizes grows exponentially, the framework's performance across different datasets changes in a relatively smooth manner. **W2:** This section will benefit from an explanation of what metrics were used for each use case and why. **A:** Thanks for this comment. In the paper, we follow the standard evaluation metric used in the CLRS benchmark to ensure a fair comparison with previous work. The metric is called the F1 score, which involves dividing the algorithm's output into different types (such as node states, edge states, etc.), calculating the percentage of output elements that match the ground truth for each type, and then taking the average. We’ll add this detail in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The authors have addressed my main concern with the paper. As such, I've increased the score to Weak Accept.
Summary: The paper proposes a method to use the training dataset more explicitly during test time inference to improve performance. This is done with a dataset encoder module plus another processor module named open book processor. The authors validate their method in the single and multi-task set-up with good results. Ablations are performed in the multi-task set-up to determine which algorithms are most helpful. Strengths: - The paper is clear and easy to understand. - The idea is interesting and well-executed. - The evaluation is good and convincing (thank you for the error bars). Weaknesses: - The related work is very brief, arguably too brief. For instance [1,2,3] are missing. - There is no ablation for the architectural design decisions. - The description of hyper-parameters should be in the Appendix of the paper rather than the reader needing to open a different paper to find them. This would help this paper stand on its own. - It would be good to re-iterate the metric (f1 score I believe) and training graph vs test graph size. - Only the final performance is measured on the larger graphs. Another important and interesting metric is how well the actual algorithm is learned. For a given task, e.g. single-source shortest path, there are several algorithms that solve this problem (e.g. Dijkstra and Bellman-Ford). Not all algorithms are equally easy to learn (e.g. Bellman-Ford is much more nicely aligned to the GNN architecture than Dijkstra and thus easier). To what extent the correct algorithm is learned can be shown by the accuracy on various algorithm-specific intermediate hints, e.g. the next node selected in Dijkstra. I suspect that for instance in the multi-task set-up it happens that the "wrong" algorithm is learned (see Table 2, Dijkstra relying on Bellman-Ford). I think this matters because this is a kind of short-cut that the model may learn contrary to what we desire (defined by the training data + loss). 
[1] https://proceedings.neurips.cc/paper_files/paper/2023/file/a2370db7c99791ad5d9f3ef48ad6d464-Paper-Conference.pdf [2] https://proceedings.neurips.cc/paper_files/paper/2021/file/a2802cade04644083dcde1c8c483ed9a-Paper.pdf [3] https://arxiv.org/pdf/2406.09308 I am willing to raise my score further if the above weaknesses and questions are addressed. Technical Quality: 3 Clarity: 3 Questions for Authors: - How does the network perform as you scale the available training data points (as far as I can tell only 240 examples are tested)? - What does the generalisation curve look like across graph sizes (64,128,256,...)? - How many attention heads are you using for the cross-attention? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - Few ablations (see questions) - Related work is too brief (see weaknesses) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. This paper explores a new paradigm in neural algorithmic reasoning (NAR). We propose an open-book framework and provide a concrete implementation to demonstrate that open-book information does have the potential to enhance neural network reasoning capabilities. We believe that this work will influence future directions in NAR, as the open-book framework, combined with many other techniques, will likely lead to more powerful neural architectures, and our implementation is just the beginning. The following addresses the questions and concerns one by one. **Q1:** How does the network perform as you scale the available training data points (as far as I can tell only 240 examples are tested)? **A:** In our experiments, we set the number of auxiliary data points to 240 (= 30*8), where 30 represents the number of algorithms (tasks) and 8 represents the number of task categories. This ensures that both the single-task and multi-task settings have the same number of auxiliary data points, and facilitates future research on the impact of open-book information across different categories in the multi-task setting. Additional results are provided in the PDF attached to the global response. The performance of our method remains robust as the number of auxiliary data points varies; note that when the number of auxiliary data points is zero, our method becomes the existing model. **Q2:** What does the generalisation curve look like across graph sizes (64,128,256,...)? **A:** This is a very constructive question and provides new insight into further testing the reasoning capability. In the original test data provided in CLRS, the graph size is set to 64. We construct larger testing datasets and present our performance curve when the test graph size grows exponentially from 64 to 512 in Figure 2 of the attached PDF (in the global response). 
We observe that the curves for different algorithmic tasks vary, likely due to the complexity of the solution space for each task. Overall, as the testing size grows exponentially, the performance in most tasks declines smoothly. Another related experiment is shown in Figure 3, where we fix the testing size and vary the training size. We conclude that the average performance of our method remains robust as the testing/training ratio increases exponentially. **Q3:** How many attention heads are you using for the cross-attention? **A:** The results presented in the paper use only one attention head. We've tried other settings, but the results were almost the same. We'll add a short discussion about this in the final version. **W1:** Concerns about learning "wrong" algorithms. **A:** For the CLRS models, the training loss is defined as the average loss of outputs and all intermediate hints (If the hint is categorical, we calculate the cross-entropy loss; if it's scalar, we use the MSE loss.) Thus, the network is always trained to mimic each step of the algorithm rather than focusing solely on the final score. This training approach helps prevent shortcut learning. In Figure 4 of the attached PDF, we show the loss during the training of the Dijkstra algorithm. Furthermore, in Figure 5, we present the loss trajectory on the validation set, demonstrating that open-book information indeed enhances the step-by-step imitation process. **W2:** It would be good to re-iterate the metric (f1 score I believe) and training graph vs test graph size. **A:** You're right. The evaluation metric is the F1 score, where we first partition the algorithm's output into different types (such as node states, edge states, etc.), calculate the percentage of output elements that match the ground truth for each type, and then take the average. The maximum graph size in the training set is 16, while in the testing data, the graph size is 64. **W3:** Other weaknesses. 
**A:** Thank you for these valuable comments. We will add the missing reference as well as more training details, and incorporate additional ablation studies in the final version.
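The evaluation metric reiterated in W2 above (per-type match rates, then averaged) can be sketched minimally as follows; this is our illustrative reading of the description, not the benchmark's actual implementation, and all names are hypothetical:

```python
def clrs_style_score(pred, truth):
    """Average per-type match rate: `pred` and `truth` map each output
    type (e.g. 'node', 'edge') to equal-length lists of output elements."""
    per_type = []
    for kind in truth:
        matches = sum(p == t for p, t in zip(pred[kind], truth[kind]))
        per_type.append(matches / len(truth[kind]))
    # average the per-type match percentages
    return sum(per_type) / len(per_type)

score = clrs_style_score(
    {"node": [1, 0, 1, 1], "edge": [0, 1]},
    {"node": [1, 0, 0, 1], "edge": [0, 1]},
)  # node: 3/4, edge: 2/2 -> (0.75 + 1.0) / 2 = 0.875
```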
null
null
Rebuttal 1: Rebuttal: We thank all reviewers for the helpful and constructive comments. We greatly appreciate the time and effort you put into this review, and we will incorporate your suggestions into the final version. The paper explores a new paradigm in neural algorithmic reasoning. We propose an open-book framework and demonstrate that open-book information can indeed enhance the reasoning capabilities of existing architectures. We believe that there is significant potential for further exploration within the open-book framework, and that this work could influence future directions in NAR. To address some concerns raised by reviewers, we provide additional experimental results in the attached PDF file. Due to constraints, we only show five figures involving representative algorithmic tasks as the remaining tasks exhibit similar trends. We will add these additional results to the paper’s full version. Pdf: /pdf/eab4a9101739a74b24c90a9fd1af6fce953afeb7.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Active Perception for Grasp Detection via Neural Graspness Field
Accept (poster)
Summary: This paper proposes a next-best-view planning method for grasp perception. The authors use a neural field to model the grasp distribution of a scene, which is learned from the graspness detection results of different views. Then the NBV is designed as the view with the largest inconsistency between the rendered graspness from the neural field and the pseudo-graspness label predicted by the network on the rendered depth. Simulation and real-world experiments show that the proposed method is slightly better than existing methods. Strengths: The idea of modelling the graspness of the scene as a neural field is novel to my knowledge. The paper is well structured and the visualization is good for readers. The method and the experiment details are well presented and most parts are easy to understand and reproduce. Weaknesses: First of all, the meaning of NBV for grasp perception is not very clear to me. The goal of grasp perception is to find ONE successful and executable grasp in the scene, rather than detecting all the feasible grasps. Secondly, the gain of the proposed method is not well validated by the experiments. In both simulation and real-world experiments, the proposed method is only slightly better than baselines. The margin is not significant. Thirdly, the grasp network that is used to predict the graspness for the neural field may produce inconsistent results across views. How do the authors address this problem during field optimization? The depth of the field for unseen views can also be erroneous. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Sec 3.4 is not clear. Why do the authors need FPS? There is no ablation study on this design. Is the graspness score generated by the network or the neural field? 2. A colorbar should be added in Figs. 3 and 8. 3. How is the deviation between NGF and predicted graspness computed? 
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes, limitations are addressed in the paper: the time cost and the method cannot be used for dynamic scenes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer DbvE** Thanks for your valuable feedback. We understand your concerns about the goal of active grasp detection, the performance gain of our method, and some other details. We address your concerns below: **Q1: Meaning of NBV for grasp perception.** While finding one feasible grasp pose allows a robot to pick up objects, grasp detection often serves as a part of semantically diverse downstream manipulation tasks [1, 2]. In such cases, it is crucial to provide diverse feasible grasp poses in the scene. For instance, grasping the cap area of a water bottle does not enable the pouring task. Similarly, when tidying up a table, the robot must select the grasping pose based on the object's placement pose. Therefore, it is reasonable for active perception to set the goal of grasp detection as finding all feasible grasps in the scene. **Q2: Performance gain of our method.** To evaluate the performance of active perception, we consider the grasp detection results with different perception steps, where our method shows significantly superior results compared to other active perception methods given the same number of steps. As shown in Figure 4 of our original paper, our method consistently outperforms other NBV strategies by at least 1 AP starting from the 6th step across the seen, similar, and novel sets. In the ablation study of neural graspness sampling illustrated in Figure 5 of our original paper, the incorporation of this sampling method demonstrates significant improvements across almost all planning steps. Furthermore, Table 1 of our original paper presents the results of different active grasp detection methods under a 10-step planning scenario. Our method shows notable improvements compared to the previous state-of-the-art ACE-NBV, with increases of 8.38, 6.71, and 2.98 AP on the seen, similar, and novel sets, respectively. In the context of the Graspnet-1billion benchmark, these improvements can be considered substantial. 
**Q3: Problem about the process of NGF optimization.** In the optimization process of NGF, there are indeed inconsistent graspness results across the observed views, which can be primarily controlled through two aspects: (1) During the optimization process, both the new and previously observed views are jointly involved. This approach fully utilizes multi-view information and promotes consistency in graspness across different views given the same spatial location. (2) According to previous research [3], neural representations inherently possess the capability to reconstruct relatively smooth distributions from sparse and noisy multi-view information. As for the depth error in the unseen views, because the camera moves continuously by a small distance at each step, it is relatively small in most cases and does not influence the calculation of the information gain. Figure 1 in the rebuttal PDF also illustrates the effectiveness of using the rendered depth for the calculation of grasp information gain. **Q4: Explanation of Section 3.4.** We apologize for any confusion. In Section 3.4, we introduce neural graspness sampling, which directly queries the graspness from the NGF rather than predicting it from a grasp detection network. Following previous work [4], we incorporate FPS (Farthest Point Sampling) for two reasons: (1) To ensure that grasp positions are distributed throughout the entire scene. (2) To control the number of generated grasp poses, as there can be numerous positions with high graspness values. The graspness scores are generated by the neural field, and we generate grasp poses at positions where the graspness score exceeds a threshold. **Q5: Missing colorbar.** Thanks for your suggestion. In Figure 3 and Figure 8 of the original paper, blue indicates low graspness values while yellow indicates high values. We will add the colorbar in the next version. 
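Farthest Point Sampling, as used in Q4 to spread grasp positions over the scene, can be sketched as follows. This is a minimal NumPy sketch under our own naming, not the paper's code:

```python
import numpy as np

def farthest_point_sampling(points, m, seed=0):
    """Greedy FPS: repeatedly pick the point farthest from those already
    chosen, spreading the m samples over the whole point set."""
    rng = np.random.default_rng(seed)
    n = len(points)
    chosen = [int(rng.integers(n))]  # random starting point
    dists = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(m - 1):
        nxt = int(np.argmax(dists))  # farthest from all chosen so far
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

# e.g. spread 4 grasp candidates over 100 hypothetical high-graspness positions
pts = np.random.default_rng(1).random((100, 3))
idx = farthest_point_sampling(pts, 4)
```

Greedy FPS ensures the selected positions cover the scene, which is exactly the first reason given in Q4 for using it.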
**Q6: Deviation between NGF and predicted graspness.** During the training of NGF, the rendering process of graspness is similar to that of color. We sample rays $r \in R$ from the given view and use the L2 loss to calculate the deviation, which can be formulated as: $$ L_g = \frac{1}{|R|}\sum_{r\in R}(\hat{g}(r)-g(r))^2 $$ where $\hat{g}(r)$ is the graspness rendered by NGF and $g(r)$ is the predicted graspness. During NBV planning, we employ Equation (6) in the original paper to calculate the deviation as the information gain. **References** [1] Learning task-oriented grasping for tool manipulation from simulated self-supervision, Fang et al., IJRR2020 [2] Language Embedded Radiance Fields for Zero-Shot Task-Oriented Grasping, Rashid et al., CoRL 2023 [3] In-Place Scene Labelling and Understanding with Implicit Scene Representation, Zhi et al., ICCV2021 [4] Graspness discovery in clutters for fast and accurate grasp detection, Wang et al., ICCV2021 --- Rebuttal Comment 1.1: Comment: Thank you for the response. Most of my concerns are addressed, and I raise my rate to `borderline accept' --- Reply to Comment 1.1.1: Comment: We thank the reviewer for considering our response and are pleased that it has successfully addressed the concerns.
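The graspness rendering and the deviation loss $L_g$ from Q6 above can be sketched as follows; standard NeRF-style alpha compositing along each ray is assumed, and all function and variable names are ours:

```python
import numpy as np

def render_along_ray(sigmas, values, deltas):
    """Volume-render a per-sample quantity (color, depth, or graspness)
    along one ray via standard alpha compositing."""
    alphas = 1.0 - np.exp(-np.asarray(sigmas) * np.asarray(deltas))
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return float(np.sum(weights * np.asarray(values)))

def graspness_loss(g_rendered, g_predicted):
    """L_g: mean squared deviation over the sampled rays R between the
    NGF-rendered graspness g_hat(r) and the network-predicted g(r)."""
    g_rendered = np.asarray(g_rendered, dtype=float)
    g_predicted = np.asarray(g_predicted, dtype=float)
    return float(np.mean((g_rendered - g_predicted) ** 2))
```

With a very large density at the first sample, the rendered value collapses to that sample's graspness, matching the intuition that the first occupied point along the ray dominates the rendering.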
Summary: This paper studies active perception for robotic grasp detection. It proposes an active grasp detection framework based on the Neural Graspness Field (NGF), which incrementally models the scene and facilitates next-best-view planning. For next-best-view planning, it aims to reduce the uncertainty of the NGF through a graspness inconsistency-guided policy, selecting views based on discrepancies between NGF outputs and a pre-trained graspness network. Additionally, it presents a neural graspness sampling method that decodes graspness values from the NGF to improve grasp pose detection results. Strengths: This paper considers how robotic arms can better select the next position to move towards the target object, which is rarely considered in current static two-finger grasping. I believe the authors' claim is important for the development of the robotics community. The authors have effectively combined NeRF with the significant concept of Graspness in recent years to study two-finger grasping, which I find to be an interesting topic. Weaknesses: Main concern: 1. First of all, I regret that the authors do not submit video materials in the supplementary materials. Although the authors analyze the real-world performance in the paper, the lack of videos makes it difficult to further verify the authenticity and effectiveness of the real-world experiments 2. The GraspNet-1Billion dataset includes two cameras, as mentioned in line L196 of the paper. However, in Table 1, the authors only provide experimental results for one camera, which I find insufficient. Testing on different cameras can more comprehensively demonstrate the effectiveness of the method. 3. Although the authors conduct sufficient comparison experiments to demonstrate the effectiveness of their method, I think they lack analysis experiments on some hyperparameters. For example, the authors set the maximum step to 10. 
I would like to ask the authors what the basis for setting it to 10 is, and what would happen if I used 7 steps or even fewer? 4. I think the recording of real-world experiments is insufficient. The authors should at least document the number of scenarios designed, the objects used in each scenario, and the average accuracy for each scenario. Simply recording an average success rate in Table 2 without detailing the experimental setup is, in my opinion, not rigorous. Additionally, there are no example images of cluttered scenes, making it difficult to assess the complexity of these scenarios. Typically, five objects do not constitute a cluttered scene; cluttered scenes usually consist of 10-15 or even more objects. 5. In Section 4.5, I cannot intuitively see the clear performance advantages or disadvantages between Close-loop NBV and "Ours". In my opinion, the differences are due to the distinct training methods of the two planners, resulting in varied decision-making. I suggest the authors provide more distinguishable trajectory visualizations (e.g., whether the trajectory planning of Close-loop NBV causes collisions with objects, thereby disrupting the scene). Others: 1. In equation (3), the formula for $P_n$ is missing a closing parenthesis. Additionally, equation (3) does not clearly indicate the elements of ray tracing that are being focused on. For example, in NeRF, estimating density only requires the 3D location, while estimating color involves both the 3D location and the 2D viewing direction. The authors do not clearly explain in equation (3) which information $c(r)$, $d(r)$, and $g(r)$ utilize. 2. In Table 1, the term "All views" is unclear, and there is no corresponding explanation provided in lines L245-L250. 3. Although the authors conduct an ablation study on GraspnessSample, I find the ablation experiments insufficient. As cited in the paper, the effectiveness of Graspness is already thoroughly validated in the GSNet paper. 
Conversely, the authors do not provide sufficient ablation studies on other parts of the planner. I believe these other parts are the more critical aspects of this paper. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The author mention the limitations of the paper and suggest possible solutions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Response to Reviewer 4AgP** Thanks for your valuable feedback, and we address your concerns below: **Q1: Video recording of real-world experiments.** Thanks for your suggestion. In **Figure 3** of the rebuttal PDF, we provide keyframe screenshots of one execution, including the active perception part and the robotic grasping part. The videos will also be attached. **Q2: Experiments on data captured from the Kinect camera.** Thanks for your suggestion. We conduct additional experiments with the Kinect camera and compare them with the previous SOTA method (i.e., ACE-NBV), as shown in **Figure 5** of the rebuttal PDF, demonstrating the effectiveness of our method under different cameras. Due to the time and page limitations of the rebuttal, we have only finished the experiments on the seen set and will supplement the results in the camera-ready version. **Q3: Analysis experiments on hyperparameters, such as the maximum NBV step.** In fact, we have presented the results with a varying number of steps for both the NBV policy comparison (Figure 4 in the original paper) and the Neural Graspness Sampling experiments (Figure 5 in the original paper). The horizontal axis denotes the number of planning steps. As the complete planning policy greedily chooses the view yielding the highest information gain at each step, the results with 7 or fewer steps can be directly inferred from the graph. Regarding the other hyperparameters, such as the number of iterations for NGF, we empirically know that they do not substantially influence the performance of our method. Nevertheless, we will improve our experiments on hyperparameter sensitivity in the next version. **Q4: More details about the real-world experiment setting.** In this setting, we consider 5 scenes, each containing 5 objects, and for each scene, we repeat the experiment 3 times by changing the poses of the objects within the scene. We provide a detailed record of the real-world experiment in the table below. 
| Scene_id-Pose_id | 1-1 | 1-2 | 1-3 | 2-1 | 2-2 | 2-3 | 3-1 | 3-2 | 3-3 | 4-1 | 4-2 | 4-3 | 5-1 | 5-2 | 5-3 | Overall |
| ---------------- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ------- |
| Close-loop | 4 | 4 | 3 | 3 | 5 | 3 | 4 | 4 | 3 | 4 | 3 | 3 | 4 | 3 | 3 | 53/75 |
| ACE-NBV | 4 | 2 | 3 | 4 | 3 | 3 | 3 | 3 | 4 | 3 | 2 | 2 | 4 | 4 | 3 | 47/75 |
| Ours | 5 | 3 | 4 | 4 | 3 | 4 | 3 | 3 | 4 | 4 | 3 | 4 | 5 | 4 | 3 | 56/75 |

Although we use a relatively small number of objects (5) to construct each scene, the objects in the scene are arranged in a crowded manner, with the presence of view occlusions. Therefore, our scene setting reflects the characteristics of cluttered scenes. In the rebuttal PDF, we provide the object setting of the real-world experiment scenarios in **Figure 4**. **Q5: Differences between the trajectories generated by Close-loop NBV and ours.** Close-loop NBV is a heuristic active perception method based on TSDF Fusion that calculates the number of potentially unobserved voxels as information gain without involving a training process. However, this type of method does not directly relate to grasp detection, as the view selection primarily depends on the overlap between the candidate views and the observed region. Therefore, the trajectory generated by the Close-loop NBV planner tends to uniformly surround the scene to maximize the observation area. More examples are shown in **Figure 6** of the rebuttal PDF. **Q6: Problem about Equation (3).** We apologize for the missing closing parenthesis and any confusion caused by Equation (3). Regarding the calculation for ray color $\hat{c}(r)$, depth $\hat{d}(r)$, and graspness $\hat{g}(r)$, the NGF follows the previous NeRF-SLAM mapping method [1], where only the 3D location is involved.
However, unlike the vanilla NeRF, which directly decodes the color and depth from a single MLP, the mapping framework that we employ first queries the position-corresponding feature from the learnable axis-aligned feature planes and uses separate MLPs to decode the color, depth, and graspness. This approach significantly enhances convergence speed. Although not introducing view directions when rendering RGB may lead to a slight decrease in quality, we typically only focus on depth and graspness for grasp detection. Therefore, the mapping system based on the NeRF-SLAM framework is suitable for the NGF. **Q7: Meaning of "All views" in Table 1.** In Table 1 of the original paper, "All views" refers to using all 256 views captured for a scene in the GraspNet-1Billion dataset to perform a complete reconstruction of the scene and then infer the grasping results. This result serves as an upper-bound reference for active perception methods. **Q8: Ablation study about Neural Graspness Sampling.** We would like to clarify that although the definition of graspness is proposed in GSNet, our method differs significantly in how graspness is generated and achieves better results. In GSNet, a grasp detection network is used to infer the graspness value of each point from the input point cloud, which depends on the quality of the point cloud and does not effectively utilize multi-view information. Our proposed neural graspness sampling directly queries the graspness value by position from the NGF, which fully leverages the NGF's capability to accurately model the scene's grasp distribution, leading to improved results. Therefore, it is essential to conduct an ablation study on neural graspness sampling to validate its contribution. For other parts of the NBV planner, we will strive to provide ablation studies in the next version.
**References** [1] ESLAM: Efficient Dense SLAM System Based on Hybrid Representation of Signed Distance Fields, Johari et al., CVPR 2023. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer 4AgP Comment: Thanks to the authors' reply; your experiments address most of my concerns, so I'm willing to raise my score to "borderline accept". Hopefully, you will be able to refine your ablation studies in the next version as mentioned in Q8's reply. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's feedback and are pleased that our response has addressed the concerns, resulting in a raised score. We will refine the ablation study and complete the Kinect camera experiment to strengthen our paper.
Summary: The paper introduces a novel framework utilizing a Neural Graspness Field (NGF) in conjunction with a pre-trained graspness prediction network to enhance active grasp detection. It applies online training to the NGF upon encountering a new scene view, producing RGB, depth, and graspness maps. The method involves computing information gains from potential views by assessing discrepancies between the NGF-generated and pre-trained network graspness maps to select the most informative views. During grasp inference, the framework samples grasp poses guided by NGF-derived graspness scores and downsamples them using the Farthest Point Sampling (FPS) method. Strengths: 1. The approach of leveraging graspness inconsistency to define information gain offers a targeted advancement for the robotic grasping task, potentially surpassing traditional methods that focus on uncovering occluded regions, as referenced in [1]. 2. Comprehensive evaluations are conducted through both simulated and real-world experiments, with comparisons against multiple baselines. These experiments demonstrate the framework's effectiveness in practical scenarios. [1] Breyer, Michel, et al. "Closed-loop next-best-view planning for target-driven grasping." 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022. Weaknesses: 1. A primary concern is the incremental training approach of the NGF. Since the NGF is trained incrementally, the rendered graspness maps $\hat{g}$ and depth maps might be inaccurate for novel views at the initial stage, and are thus inappropriate for computing information gain. 2. The NGF's grasp knowledge, being distilled from the pretrained graspness network, suggests that its maximum achievable performance may be inherently limited to that of the teacher network. How can you demonstrate that the proposed framework is better than the single-shot method? 3.
According to Eqn. (1), the graspness score of a position is defined as the mean grasp quality score over different orientations at that position. When sampling grasp poses at the inference stage, it seems that they only sample positions for the grippers. How are the orientations determined? 4. Visual results (Figure 3) indicate that initial high information gains may be attributed more to background inconsistencies than to the target object regions, potentially skewing the focus of planning efforts. The inconsistency of the background region should be neglected when planning. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Despite the reported rapid mapping operation time of 0.45 seconds in Table 3, the paper notes extensive training iterations for both initial and subsequent views (100 and 50 iterations). Further details on achieving such fast training times would be beneficial for understanding the feasibility of the NGF training process. 2. Equation (6) introduces confusion regarding the role of the summation symbol and the meaning of the variable $r$. Clarification of these components would aid in a better comprehension of the equation's intent and application. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
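The review summary above mentions that grasp candidates are thinned with Farthest Point Sampling. For reference, a minimal NumPy sketch of the standard greedy FPS algorithm — the function name and setup are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy FPS: repeatedly pick the point farthest from all points
    chosen so far, yielding k well-spread samples (illustrative only)."""
    n = points.shape[0]
    chosen = [0]                # arbitrary starting point
    dist = np.full(n, np.inf)   # distance to nearest chosen point
    for _ in range(k - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return points[np.array(chosen)]
```

Applied to graspness-scored positions, this keeps the selected grasp candidates spatially diverse rather than clustered on one high-graspness region.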
Rebuttal 1: Rebuttal: #### **Response to Reviewer peVt** Thanks for your valuable feedback. We understand your concerns regarding the computation of information gain with the NGF and the other issues you have raised. We address your concerns below: **Q1: Incremental training approach of the NGF.** In the initial stage of the NGF, there are indeed errors in both the rendered depth $\hat{d}$ and graspness $\hat{g}$, but they have little influence on the calculation of information gain. Our inconsistency-guided NBV policy adopts the pseudo-label paradigm by substituting the ground-truth grasp distribution $g(d)$ with the pseudo-graspness $g(\hat{d})$, which is widely employed in other semi-supervised and active learning vision tasks [1,2]. The effectiveness of our NBV policy relies on the premise that the $g(\hat{d})$ predicted by the graspness network is closer to the ground-truth $g(d)$ than $\hat{g}$ is. The smaller error in $g(\hat{d})$ can be attributed to two factors: (1) The robot-mounted camera moves continuously in small steps, resulting in minimal differences in the observed geometric information between views, leading to insignificant errors in the rendered depth $\hat{d}$. (2) The graspness prediction network, trained on a large dataset of real point clouds, inherently provides robustness to depth noise. As a supplement, we visualize the rendered graspness error $E_\hat{g} = |g(d)-\hat{g}|$ and the pseudo-graspness error $E_{g(\hat{d})} = |g(d)-g(\hat{d})|$ in **Figure 1** of the rebuttal PDF, where $E_\hat{g}$ is significantly larger than $E_{g(\hat{d})}$ and the difference decreases with more steps. **Q2: The maximum achievable performance of our method.** We are not certain of the definition of "single-shot": If "single-shot" refers to using a single-view depth map for grasp detection, our active perception method aims to reconstruct the scene geometry and grasping representation with as few steps as possible.
The scene point cloud reconstructed by active perception is used for final grasp pose prediction, resulting in significantly better performance. If "single-shot" refers to not using neural graspness sampling, it's important to note that while the grasping knowledge of the NGF originates from the teacher network's output, the NGF enhances the multi-view consistency and smoothness of the graspness distribution through implicit multi-view fusion. This is particularly beneficial when the single-view output is noisy and views are sparse, as demonstrated in similar scenarios such as 3D segmentation [3]. Therefore, sampling grasping positions from the NGF yields better results compared to directly sampling from the network's graspness prediction. **Q3: The inference pipeline with neural graspness sampling.** We are sorry for the confusion caused. To clarify, we still rely on a grasp detection network to perform inference on the reconstructed scene point cloud after active perception, rather than directly sampling grasp poses from the NGF. For neural graspness sampling, we replace the graspness output of the grasp detection network with values obtained from the NGF based on position, while parameters such as rotation and gripper width are still inferred using the grasp detection network. **Q4: Visualization in Figure 3 of the original paper.** In robotic grasping scenarios, it is difficult to separate objects from the background in 3D space using single-view perception. Therefore, given a candidate view, the graspness values of both foreground and background regions are used to compute the information gain. For our method, not all selected views with high information gain arise from background errors. We provide more visualization results in **Figure 2** of the rebuttal PDF for illustration, where we use yellow rectangles to highlight the foreground inconsistency regions.
**Q5: The mapping time for our framework.** The mapping method is an extension of an efficient dense SLAM method [4] (citation [12] in our paper) that exploits axis-aligned feature planes and implicit Truncated Signed Distance Fields to reduce the number of training iterations, whereas previous dense neural mapping methods require more iterations for convergence. **Q6: Explanation of Equation (6).** We are sorry for the confusion caused. $g(r)$ denotes the rendered graspness of the sampled ray $r$, and the summation over rays yields the rendered graspness of the whole view. The definition follows Equation (3) in our paper and we will clarify it in the camera-ready version. **References** [1] Rethinking Pseudo Labels for Semi-Supervised Object Detection, Li et al., AAAI 2022. [2] Learning From Synthetic Images via Active Pseudo-Labeling, Song et al., TIP 2020. [3] In-Place Scene Labelling and Understanding with Implicit Scene Representation, Zhi et al., ICCV 2021. [4] ESLAM: Efficient Dense SLAM System Based on Hybrid Representation of Signed Distance Fields, Johari et al., CVPR 2023. --- Rebuttal 2: Comment: I appreciate the authors' response, which addresses most of my concerns and helps me understand this work better. Here are some of my follow-up questions: 1. Since you already have the observed depth map, why do you use $g(\hat{d})$ to guide the training of $\phi_g(p)$ rather than $g(d)$? 2. For Q2, 'single-shot' refers to the first case mentioned above by the authors. It seems the single-shot method [1] already achieves better APs compared to the results reported in Table 1. Could the authors elaborate on the advantages of the proposed method compared to [1]? [1] Wang, Chenxi, et al. "Graspness discovery in clutters for fast and accurate grasp detection." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. --- Rebuttal Comment 2.1: Comment: Thanks for your feedback and we are pleased that our response addresses most of your concerns.
As for the follow-up questions, we address them below: **Q1**: We use $g(\hat{d})$ rather than $g(d)$ to guide the training of $\phi_g(p)$ because we observed that after mapping a view, the view-rendered depth $\hat{d}$ has better quality than the original depth map observed by the camera, with fewer missing points and smoother surfaces. Using the rendered depth as input to calculate view graspness achieves better results than directly using the original depth. Based on this observation, we employ a two-stage training process. First, we optimize the RGB and depth. Next, we use the rendered depth to calculate $g(\hat{d})$ for the guidance of graspness. **Q2**: We understand your concern about the final performance, where our results reported in the paper are lower than GSNet, which only utilizes a single-view point cloud as input. Firstly, we want to clarify that the targets of GSNet and our method are quite different. Our method aims to build a complete 3D representation for grasp detection rather than to design a grasp detection network, which means our method can be applied to any grasp detection method that takes 3D information as input. With actively reconstructed 3D information, grasp detection methods can achieve better performance, especially for unseen objects whose shapes deviate from those in the training set. Unseen objects are not incorporated in the training process, so the grasp detection network struggles with the single-view input, where the geometric information is incomplete and noisy. Besides, utilizing multi-view information for 3D reconstruction can make the grasp detection method work on transparent and specular objects [1], where the single-view depth map is of very low quality. For the final results, in our original paper, we use the baseline grasp detection network from [2] rather than GSNet for evaluation. For a fair comparison, we provide the results inferred by pre-trained GSNet here. CD stands for collision detection.
With the input reconstructed by 10 steps of active perception, GSNet performs better on the similar and novel sets while showing a performance drop on seen objects. Our reconstructed data is not involved during training, so we tend to attribute the performance drop on seen objects to the distribution gap between reconstructed and single-view point clouds. We will attempt to include the reconstructed point clouds for training to achieve better results in the future.

| Method | Seen-AP | Similar-AP | Novel-AP |
| --------------------- | --------- | ---------- | --------- |
| GSNet-singleview | **65.70** | 53.75 | 23.98 |
| GSNet-10 steps | 57.62 | **55.42** | **24.53** |
| GSNet-singleview + CD | **67.12** | 54.81 | 24.31 |
| GSNet-10 steps + CD | 61.78 | **61.60** | **26.55** |

**References** [1] GraspNeRF: Multiview-Based 6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF, Dai et al., IEEE International Conference on Robotics and Automation, 2023. [2] Generalizing 6-DoF Grasp Detection via Domain Prior Knowledge, Ma et al., IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. --- Reply to Comment 2.1.1: Comment: Dear Reviewer peVt, We appreciate your previous feedback and have provided our response to your questions. As the discussion phase is nearing its end on August 13, we want to check whether you have had the opportunity to review our latest response. If you have any further concerns or require additional clarification, we are willing to address them within the remaining discussion period. Thank you for your time and consideration.
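The thread above discusses ray-level graspness rendering (Equations (3) and (6)) and the inconsistency between rendered graspness $\hat{g}$ and pseudo-graspness $g(\hat{d})$ used as information gain. A toy NumPy sketch of the standard volume-rendering compositing behind such a scheme — variable names and structure are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def render_ray(sigma, values, deltas):
    """Standard volume-rendering compositing along one ray:
    alpha from density * step size, weights = alpha * transmittance."""
    alpha = 1.0 - np.exp(-sigma * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = alpha * trans
    return float(np.sum(weights * values))

def view_information_gain(rendered_g, pseudo_g):
    """Inconsistency between field-rendered graspness and the network's
    pseudo-graspness, aggregated over all rays of a candidate view."""
    return float(np.sum(np.abs(rendered_g - pseudo_g)))
```

Under this sketch, views where the field's rendered graspness disagrees most with the pseudo-labels score highest and would be visited next.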
Summary: This work proposes an active perception method for grasp detection composed of two parts: neural graspness field mapping and next-best-view planning with a graspness inconsistency-guided strategy. A corresponding inference strategy is also proposed, decoding the graspness score from the NGF to generate grasp samples. The evaluation benchmark is constructed based on the GraspNet-1B benchmark, and the experimental results indicate consistent improvements on the seen, similar, and novel sets. Strengths: 1. This work is well motivated: it aims to address both the negative effect of incomplete 3D geometry on learning-based grasp detection methods and the time cost of scanning the whole scene, finding a trade-off between them. 2. Experiments on GraspNet-1B demonstrate performance improvements compared to previous methods. Weaknesses: 1. Relatively small performance improvement on the novel set compared to ACE-NBV. 2. Generalization and distraction experiments are not provided to verify the model's robustness. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you explain why, as the number of planning steps increases, your method encounters a significant drop in performance on the novel set? 2. Could you provide further generalization and distraction experiments to verify your model's robustness? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. The NGF-based method may be constrained to static environments and cannot adapt to dynamic scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to Reviewer DQft** Thanks for your valuable feedback. We appreciate your acknowledgment of our active perception method based on the neural graspness field and the performance improvements achieved. We address your concerns below: **Q1: The relatively weak improvement on the novel set.** In our experiments, the improvement of the proposed method compared to other NBV strategies is less pronounced on the novel set than on the seen and similar sets. We attribute this to several factors: (1) The grasp detection model for inference demonstrates relatively lower performance on the novel set compared to the seen and similar sets. This inherent limitation constrains the potential improvements through active perception. (2) The graspness prediction network we employed has not been exposed to objects from the novel set during training. Consequently, the view graspness prediction used for training the NGF may lack accuracy for novel objects. This potential inaccuracy in graspness prediction may impact the performance of our active perception approach. Our method also experiences a performance drop between steps 3 and 4 in Figure 4 (c) of the original paper. We attribute this primarily to the grasp detection model's suboptimal performance on the novel set, which leads to a scarcity of positive grasp samples and makes the final results unstable. As the number of views gradually increases, our active perception method still achieves a stable performance improvement on the novel set. **Q2: Generalization and distraction experiments.** Thanks for your suggestion. In future work, we will strive to supplement our study with ablation experiments on different modules and hyperparameters of our method to demonstrate its robustness and reliability. --- Rebuttal Comment 1.1: Comment: Thank you for your response to my concerns. I will keep my score and hopefully you can add generalization and distraction experiments as you mentioned in your answer to Q2.
--- Reply to Comment 1.1.1: Comment: Thanks for your feedback and suggestion. We will add the ablation studies mentioned in the answer to Q2 in the next version.
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their insightful feedback on our submission. We are grateful for their acknowledgment of our paper's contribution to active perception in robotic grasping. To address the questions raised regarding our method design and experimental work, we have provided comprehensive explanations aimed at clarifying the reviewers' concerns. We have also included a rebuttal PDF containing the figures referenced in our response. We summarize the key points of our response below: 1. **The calculation of information gain:** Reviewer peVt expressed concern about the accuracy of the information gain calculation, given potential errors in rendered depth and graspness. Reviewer DbvE also mentioned possible errors in rendered depth for unseen views. In our response, we have provided a detailed illustration addressing the influence of these potential errors on view planning and included a numerical analysis in the rebuttal PDF. 2. **Neural graspness sampling**: Reviewers 4AgP and DbvE expressed confusion about how we use neural graspness sampling for grasp pose inference. Our original paper may have been unclear, leading to misunderstanding. In our response, we have provided a detailed explanation to clarify this point. 3. **Real-world experiment setting:** Reviewer 4AgP provided many valuable questions and suggestions on our real-world experiments. We have supplemented our response with additional details about the real-world scene settings and comprehensive results. Furthermore, we have included keyframe screenshots from the video of our method being executed on a real robot in the rebuttal PDF. Pdf: /pdf/51778ca0bf2b2d8b60e690e03a69f340f79aecd6.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting
Accept (poster)
Summary: The proposed method extends the idea of GaussianFlow to explicitly decouple camera motion and object motion from optical flow on input monocular video. Specifically, the iterative camera pose refinement further boosts the rendering quality and performance on various datasets. Extensive qualitative and quantitative results are shown in the paper to validate the proposed contributions. Strengths: 1. The paper extends the idea of GaussianFlow to explicitly decouple the camera motion and object motion from optical flow, which is reasonable, and the formulation of the proposed method also makes sense. 2. Explicitly modeling camera motion in an iterative way during training is useful for monocular NVS because the rendering error partly caused by incorrect camera pose estimation can be further reduced. Though camera pose refinement is not originally introduced by this paper, the decoupling scheme is a highlight. 3. The ablation study (in both the main paper and appendix) is detailed and inspiring, validating the main contributions claimed in the paper. 4. Both qualitative and quantitative results look good. Weaknesses: No prominent weakness is witnessed. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Since the paper showcased the failure case in Fig. 13, I would like to know whether the results could be improved if the camera refinement step were removed (leaving optical flow equal to motion flow), considering that camera poses in the DyNeRF dataset are fixed? (The original GaussianFlow paper minimized the difference between optical flow and Gaussian flow, whereas the proposed method minimizes the difference between Gaussian flow and motion flow, which is optical flow minus camera flow, and the camera flow in DyNeRF is 0.) Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: Limitations have already been included and acknowledged in the paper. The proposed method seems to be less effective on cases with less camera motion.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Section 4 Response to Reviewer 2xXk We thank the reviewer for the constructive assessment of our work. In the subsequent sections, we respond to each concern in detail. Please feel free to use the discussion period if you have any additional questions. ## 4.1 Questions ### 4.1.1 Failure case in DyNeRF dataset Thank you to the reviewer for the constructive feedback. As discussed in our response to reviewer 4hTV, the failure on the DyNeRF dataset primarily stems from incorrect computation of the camera flow. More specifically, this issue arises from inaccurate depth estimation. As illustrated in Figure D of the PDF attachment, our method renders incorrect depth, leading to noticeable artifacts. Theoretically, removing the camera refinement step and directly deriving the Gaussian flow from the optical flow can avoid introducing erroneous depth. Unfortunately, after conducting experiments as suggested by the reviewer, the rendered scenes still exhibit certain artifacts, which typically appear only in the test views and not in the training views. We speculate that this issue arises because of the sparse viewpoint setup in the DyNeRF dataset, which is composed of videos captured by several fixed-position cameras rather than a single monocular video. These sparse viewpoints may lead to suboptimal Gaussian initialization and cause the model to overfit to the training views during subsequent optimization. Nevertheless, we sincerely thank the reviewer for this insightful idea, which inspired us to think more deeply about the possible limitations of our method. For scenes with stationary cameras, using reliable depth priors to provide regularized constraints may be a potential solution to alleviate this ill-posed problem, as in the recent work MoDGS [10]. In future work, we aim to combine sparse-view 3DGS methods to enhance the quality of dynamic scene reconstruction under sparse viewpoint conditions.
--- Rebuttal Comment 1.1: Title: Good job Comment: I am satisfied with the response and I will keep my positive score. --- Reply to Comment 1.1.1: Title: Thanks to reviewer 2xXk Comment: Dear Reviewer 2xXk, We sincerely appreciate your recognition of our work and the valuable feedback you provided. Your detailed comments and constructive suggestions have not only helped us improve the manuscript but also offered significant guidance for our future research. Thank you once again for your time and effort in reviewing our work. Best regards, Authors of Submission 3272
Summary: This paper proposes using off-the-shelf 2D optical flow to supervise the deformation field for 3D Gaussian Splatting (3DGS) in dynamic scenes. The optical flow is decomposed into camera flow and motion flow. The 3DGS flow is projected into 2D to match the estimated flow. Camera pose and Gaussian parameters are optimized iteratively. Experiments demonstrate the effectiveness of the proposed method. Strengths: 1. The formulation and utilization of optical flow are reasonable, and the experiments demonstrate its effectiveness. Weaknesses: 1. The idea of using optical flow to guide dynamic 3DGS modeling is not particularly novel; it seems more like an engineering effort. 2. With the additional 2D optical flow supervision, the improvement shown in the experiments is subtle. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Without explicitly modeling reflections, how does it perform well on the NeRF-DS dataset, even outperforming NeRF-DS with additional modeling? 2. With additional optimizations, how much longer would the training time be compared to the baseline and other methods? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes; other limitations are listed in the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Section 3 Response to Reviewer pL6e We thank the reviewer for the constructive assessment of our work. In the subsequent sections, we respond to each concern in detail. Please feel free to use the discussion period if you have any additional questions. ## 3.1 Weaknesses ### 3.1.1 Novelty We appreciate your detailed review of our work. We speculate that there might be some misunderstandings about the novelty and contributions of our work, and we would like to clarify further. Our primary contribution is the separation of camera motion and object motion in optical flow, which provides clear motion guidance for dynamic Gaussians. Specifically, our method begins by acquiring an optical flow prior through an off-the-shelf optical flow estimation network. It then calculates the optical flow caused solely by camera motion by integrating depth and camera pose. Finally, it isolates the optical flow attributed to object motion to constrain the deformation of 3D Gaussians. This approach effectively creates a precise correspondence between the 2D motion prior and the 3D Gaussian motion. Therefore, it is not entirely accurate to characterize our work as merely applying optical flow supervision to Gaussian motion. As shown in Table 3 of our main paper, directly constraining the Gaussians and deformation fields with optical flow even results in performance degradation. We believe this is because the camera motion incorrectly influences the deformation of dynamic Gaussians, while our method effectively addresses this issue. Previous research on motion in the NeRF and 3DGS fields, such as [7] and [8], has not addressed the interplay between camera and object motion from the perspective of optical flow formation. In contrast, our method is the first to propose decoupling optical flow for motion guidance, presenting a novel solution for dynamic scene rendering. ### 3.1.2 Performance gains We appreciate the reviewer's examination of our work.
We would like to provide a detailed clarification and further explanation regarding our performance contributions. Previous dynamic Gaussian methods often struggle to achieve accurate reconstruction in dynamic scenes with complex motion and imprecise poses, sometimes even failing to recover the structure of dynamic objects, as illustrated in Figure 6 of our main paper. By incorporating both motion guidance and camera pose optimization, our approach achieves more accurate reconstruction in complex dynamic scenes, as demonstrated in Tables 1 and 2 of our main paper (with a 0.93dB mean PSNR increase on NeRF-DS and a 2.3dB mean PSNR increase on HyperNeRF). The ablation study also indicates that our method significantly improves the rendering quality in scenarios with complex motion and inaccurate poses. ## 3.2 Questions ### 3.2.1 Modeling reflections We appreciate the reviewer's questions. Indeed, we do not explicitly model reflections like NeRF-DS does. However, our approach has two advantages over NeRF-DS: - Effectiveness of the deformable 3DGS Framework: The 3DGS technique represents the scene as a set of anisotropic 3D Gaussians and employs an efficient differentiable rasterizer for rendering, achieving high-quality and real-time results [9]. On multiple datasets, 3DGS has matched or surpassed the performance of previous state-of-the-art NeRF methods. Our baseline method extends the rendering quality advantages of 3DGS to dynamic scenes by incorporating deformation fields into the 3DGS framework. As shown in Table 1, even our baseline method achieves rendering performance comparable to NeRF-DS without explicit modeling of reflections. The anisotropic 3D Gaussian spheres and the deformation field can model complex geometric details over time, enabling the baseline method to deliver high-quality rendering in dynamic scenes.
- Effectiveness of Motion Guidance: While the baseline method delivers satisfactory rendering in most cases, there are still limitations in some challenging dynamic regions (e.g., the plate in Figure 5 of our main paper). Our method enhances performance in dynamic scenes by providing reliable motion constraints to the Gaussian deformation. By accurately separating and constraining different motion components in dynamic scenes, our method performs exceptionally well even in foreground areas with complex textures and reflections. ### 3.2.2 Training time We appreciate the reviewer's question. We recognize that training time is a crucial factor in assessing the practicality of a method. Below, we provide detailed training times for our method on the NeRF-DS dataset:

## Training Time on the NeRF-DS Dataset

| Training Time | As | Basin | Bell | Cup | Plate | Press | Sieve |
|:-------------------------|:-------|:--------|:-------|:-------|:--------|:--------|:--------|
| Baseline | 1h 1m | 1h 11m | 1h 42m | 1h 3m | 1h 0m | 0h 51m | 0h 57m |
| Ours-w/o pose refinement | 1h 8m | 1h 15m | 1h 53m | 1h 13m | 1h 6m | 0h 58m | 1h 3m |
| Ours | 1h 33m | 1h 46m | 2h 34m | 1h 34m | 1h 30m | 1h 17m | 1h 25m |
| NeRF-DS | 6h 43m | 6h 48m | 6h 49m | 6h 50m | 6h 53m | 6h 48m | 6h 47m |

Our method exhibits a slight increase in training time compared to the baseline method. This is primarily due to the inclusion of a differentiable Gaussian flow rasterizer [8] and the process of camera pose refinement. Notably, since the proposed improvements only affect the training process, our method maintains the same real-time rendering speed as our baseline during inference. --- Rebuttal Comment 1.1: Title: Replying to Rebuttal by Authors Comment: Thanks for the clarification about the contribution: camera and object motion separation from the estimated optical flow, especially for the cases with complex motion and inaccurate poses.
Given the additional analysis and training time comparison, I would like to raise the score. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer pL6e Comment: Dear Reviewer pL6e, Thank you for improving the score of our paper. We are glad that our clarification of the contribution of camera and object motion separation was helpful and appreciate your recognition of our additional analysis and training time comparison. We will make sure to present these aspects clearly in the revised manuscript to highlight the effectiveness and efficiency of our approach. Thank you again for your feedback and for recognizing the contribution of our work. Best regards, Authors of Submission 3272
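The decoupling step described in the rebuttal above (estimate the total optical flow, compute the camera-induced flow from rendered depth and camera poses, and subtract it to isolate the object-motion flow) can be sketched geometrically as below. This is a minimal NumPy reconstruction for illustration, not the authors' implementation; the function name, pinhole-camera conventions, and world-to-camera pose parameterization are assumptions.

```python
import numpy as np

def decouple_flow(total_flow, depth_t, K, T_t, T_t1):
    """Split estimated optical flow into camera flow and object motion flow.

    total_flow : (H, W, 2) flow from an off-the-shelf estimator, as (dx, dy).
    depth_t    : (H, W) depth rendered from the canonical 3D Gaussians.
    K          : (3, 3) camera intrinsics.
    T_t, T_t1  : (4, 4) world-to-camera poses of frames t and t+1.
    """
    H, W = depth_t.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)

    # Unproject pixels of frame t into world space using the rendered depth.
    rays = np.linalg.inv(K) @ pix.reshape(-1, 3).T            # (3, H*W)
    cam_pts = rays * depth_t.reshape(1, -1)
    world = np.linalg.inv(T_t) @ np.vstack([cam_pts, np.ones((1, H * W))])

    # Reproject into frame t+1; the induced 2D displacement is the camera flow.
    cam1 = (T_t1 @ world)[:3]
    proj = (K @ cam1) / cam1[2:3]
    camera_flow = proj[:2].T.reshape(H, W, 2) - pix[..., :2]

    # The remainder of the observed flow is attributed to object motion.
    motion_flow = total_flow - camera_flow
    return camera_flow, motion_flow
```

With identical poses for both frames, the camera flow is zero and all observed flow is attributed to object motion, which is the sanity check the decomposition should satisfy.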
Summary: The paper proposes MotionGS, a novel deformable 3D Gaussian splatting approach. The approach initializes camera poses and 3D Gaussians based on an analytic structure-from-motion method, as in 3DGS. In addition, an optical flow network is used to compute optical flow between neighboring frames. Given the initial depth of the 3D Gaussians, the paper proposes to use an optical flow decoupling module separating the camera and object motion. This process is optimized to obtain better camera poses and better views through 3D Gaussian splatting, and then iterated. The paper uses Gaussian flow, which has very recently been proposed by Gao et al., 2024, however matching only the flow due to object motion. The iterative improvement of the camera poses is based on the estimation of small residuals between consecutive views. The paper reports experiments on the NeRF-DS and HyperNeRF datasets, achieving high reconstruction quality and improving on Deformable-3DGS. An ablation study is presented comparing object motion flow with optical flow without separation, and the effects of camera pose refinement. The supplemental material contains more details on Gaussian flow, more implementation details, visualizations and discussions. The appendix also contains a further ablation study quantifying the influence of the depth, optical flow etc. Strengths: Explicit motion guidance for 3D Gaussian reconstruction is a logical next step for deformable 3D Gaussians. The paper combines and extends some very recent parallel work in optimization of camera pose estimation by Fu et al., 2024 and optical-flow-based Gaussian flow by Gao et al., 2024. The paper demonstrates the successful application of the proposed MotionGS framework to non-rigid scenes recorded by video streams. The reported reconstruction quality of MotionGS is competitive and often state-of-the-art for videos in the chosen datasets.
Weaknesses: The paper has a large range of limiting assumptions which are not always made clear. COLMAP must find an initial solution for camera pose estimation, which requires enough static features in the scene. A video input stream is needed to ensure that neighboring viewpoints are very close for optical flow estimation to succeed. The 3DGS canonical representation must be found in order for depth estimation to work. The datasets used in evaluating the proposed method can be processed with deformable 3DGS, and hence the proposed MotionGS method is limited to improving reconstruction quality. No reconstruction beyond the limitations of prior work is demonstrated, and the above limiting assumptions make this likely challenging. The quality of camera pose estimation is not well evaluated. There is only a visualization in the appendix, which is very hard to interpret. (See Fu et al. 2024 for a more meaningful evaluation). The proposed method appears to be quite brittle, as can be seen from the ablation study in Table 5 in Appendix A.3. The method falls below the baseline by leaving out the motion mask, switching to single-image depth estimation, and even by switching to another SOTA flow estimation network. Technical Quality: 3 Clarity: 3 Questions for Authors: The key to MotionGS appears to be the separation of optical flow into object flow and camera flow. The paper does not really review work in this area; would there be other ways to attribute the flow to object and camera motion? If Deformable 3DGS fails to find a reasonable canonical representation at initialization, is there a fallback to start the iterative optimization? On which data was the optical flow network trained, and has the optical flow network been finetuned on the data? Would self-supervised optical flow methods, which are known to generalize better, improve the flow estimation?
Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors provide a limitation statement in Appendix A.5 but focus only on optical flow failures. The claim of optical flow failure for static cameras is surprising, maybe a shortcoming of the particular flow estimator. It would be more interesting to investigate the robustness to large motions, which likely leads to severe flow failures. In addition, the limitations of structure-from-motion and depth estimation are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Section 2 Response to Reviewer 4hTv We thank the reviewer for the constructive assessment of our work. In the subsequent sections, we respond to each concern in detail. Please feel free to use the discussion period if you have any additional questions. ## 2.1 Weaknesses ### 2.1.1 Limiting assumptions Please see Section 0.1 for details. ### 2.1.2 Beyond the limitation of prior work We appreciate the reviewer's careful examination. Indeed, in some challenging scenarios, our baseline method is confined to reconstructing static backgrounds and exhibits limited performance on dynamic objects. As shown in Figure 6, the baseline method fails to capture even the basic structure of the broom. The baseline method also acknowledges this limitation and attributes it to inaccuracies in camera poses. Therefore, we propose a camera pose refinement module to iteratively optimize the 3D Gaussians and camera poses. As a result, MotionGS is more capable of handling scenes with complex motions and inaccurate camera poses than our baseline method. ### 2.1.3 The quality of camera pose estimation Please see Section 0.3 for details. ### 2.1.4 Ablation study in Table 5 We appreciate the reviewer's detailed analysis of our work. Here, we provide a detailed explanation and clarification regarding the ablation study presented in Table 5: 1. **The Role of the Motion Mask:** - The motion mask plays a critical role in our pipeline. As discussed in Section 4.2 of the main paper, our method benefits more from the motion mask than previous approaches [7]. Specifically, our method effectively utilizes the motion mask to filter out unreasonable motion flow in static areas. In contrast, when only optical flow supervision is used, the motion mask cannot be directly applied. 2. **Limitations of Monocular Depth Estimator:** - Our ablation experiments show that using an off-the-shelf monocular depth estimator introduces a performance drop.
We argue that this is due to the scale ambiguity of monocular depth estimators, which results in inaccurate camera flow and motion guidance. In contrast, depth maps rendered by our method provide accurate scales and better details, as shown in Figure B of the PDF attachment. 3. **Impact of Optical Flow Networks:** - The performance drop observed when switching optical flow networks is indeed unexpected. With further analysis, we find that FlowFormer performed inadequately in some challenging scenes, especially the "plate" scene, resulting in an overall decrease in performance. Since GMFlow is trained on more datasets, it may have stronger zero-shot generalization capability than FlowFormer. ## 2.2 Questions ### 2.2.1 Related work on optical flow decoupling Please see Section 0.2 for details. ### 2.2.2 Fallback to start the iterative optimization Thank you for this important question. Like our baseline method, we freeze the deformation field during the Gaussian initialization to obtain a reliable canonical representation. In most tested datasets, this canonical representation can be initialized well. Even with inaccurate camera poses, our method can render satisfactory results through iterative optimization. Currently, we have not yet explored extreme cases (e.g., COLMAP failure) or developed alternative initialization strategies to replace COLMAP. ### 2.2.3 Optical flow network Thanks for your question. The GMFlow model [4] is trained on a range of datasets (KITTI, HD1K, FlyingThings3D, and Sintel). We use the pre-trained model from the original paper without further fine-tuning on our datasets. ### 2.2.4 Self-supervised optical flow methods Thank you for the valuable suggestion. We used the self-supervised optical flow estimation algorithm MDFlow [5] in our ablation experiments. The results show that the motion constraints provided by the self-supervised optical flow estimation network do not bring an effective improvement.
## Novel View Synthesis Results of NeRF-DS Dataset

| Method | SSIM↑ | PSNR↑ | LPIPS↓ |
|:--------------|-------:|-------:|-----------:|
| baseline | 0.8394 | 23.61 | 0.1970 |
| ours (w/o pose refinement) | **0.8609** | **24.12** | **0.1763** |
| Self-supervised flow estimator | 0.8308 | 23.25 | 0.2137 |

We speculate that this is because self-supervised methods cannot provide sufficiently accurate optical flow priors. With the increasing amount of annotated data and the development of foundation models in recent years, the generalization ability of fully-supervised optical flow estimation methods has significantly improved, surpassing traditional self-supervised methods. Similarly, works like MonoNeRF [6], Dynpoint [3], and DynIBaR [7] also use fully-supervised optical flow networks for optical flow priors. This phenomenon also illustrates the importance of accurate motion priors: erroneous or noisy motion constraints may even have a negative effect on the optimization. ## 2.3 Limitations We appreciate the reviewer's insightful feedback. We recognize that the term "instability in optical flow computation" mentioned in our paper might have introduced some confusion. Upon further analysis, we find that the fixed and sparse camera viewpoints in the DyNeRF dataset hinder accurate depth rendering, affecting subsequent camera flow calculations and leading to artifacts as shown in Figure B of the PDF attachment. The inaccuracies in motion flow primarily come from the inaccuracy of the camera flow, rather than a failure of the optical flow estimation itself. It is also important to clarify that the DyNeRF dataset is not continuous monocular video but rather dynamic scenes with sparse viewpoints, which poses challenges to the canonical 3D Gaussian initialization.
This poor initialization prevents our method from rendering accurate depth and performing subsequent optimizations; in future work we will consider combining sparse-view 3DGS methods to improve the robustness of MotionGS. --- Rebuttal Comment 1.1: Title: Detailed Rebuttal Comment: I thank the authors for their detailed rebuttal with additional experiments, figures and tables. The performance drop with self-supervised MDFlow is very interesting. In general, I find the answers informative and helpful. My additional comments are as follows: I think it would be helpful to readers (especially those less familiar with 3DGS) to include the discussion in Section 0.1 in the paper. I find the explanation of the reduced performance with monocular depth estimation both interesting and very reasonable. Ideally, one could evaluate the improvement of camera poses with synthetic data where ground truth is known, but I acknowledge the difficulty, as real-world challenges of using COLMAP must be included at the same time. Maybe this can be left to future work. In my experience, it is not uncommon for SfM to completely fail in very dynamic real-world scenes. I appreciate that imprecise pose estimates can be improved; however, depending on the imagery, complete failure is a challenge. Again, maybe something for future work. --- Reply to Comment 1.1.1: Title: Response to the additional comments by Reviewer 4hTv Comment: Dear Reviewer 4hTv, Thank you for your positive feedback on our rebuttal and your insightful comments. Here is our response to your additional suggestions: 1. Including the Discussion in Section 0.1: We appreciate your suggestion to include the discussion from Section 0.1 in the paper. We have now integrated this discussion into L211-L219 (Section 4.2) in our main paper, to provide a clearer context and aid in the understanding of our contributions. 2. Explanation of Reduced Performance with Monocular Depth Estimation: Thank you for your approval of our explanation.
We have ensured that this explanation is well-integrated into the revised paper. 3. Evaluating Camera Pose Improvement with Synthetic Data: We agree that evaluating the improvement of camera poses using synthetic data, where ground truth is available, would be ideal. We appreciate your suggestion and have noted it as our future work. We have also added a brief discussion in the limitation to reflect this point. 4. Challenges with SfM in Dynamic Real-World Scenes: We concur with your observation regarding the challenges of structure-from-motion (SfM) in highly dynamic real-world scenes. Indeed, complete failures of SfM are a known issue in such scenarios. While our method can improve imprecise pose estimates, it cannot fully mitigate the risk of complete failure, especially under extreme conditions. We have highlighted this limitation in the revised paper. Thank you once again for your professional and constructive comments and recognition of our work, which is crucial to further improving the quality of our work. Best regards, Authors of Submission 3272
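The role of the motion mask discussed in the rebuttal above (filtering out unreasonable motion flow in static areas so it cannot penalize the deformation field) can be illustrated with a toy NumPy sketch. The function name and the mean-L1 form of the penalty are assumptions for illustration; the paper's actual loss may differ.

```python
import numpy as np

def masked_motion_flow_loss(pred_flow, motion_flow, motion_mask):
    """Mean L1 flow error, restricted to pixels the motion mask marks dynamic.

    pred_flow   : (H, W, 2) 2D flow rendered from the deforming Gaussians.
    motion_flow : (H, W, 2) object-motion component of the optical flow.
    motion_mask : (H, W) binary mask, 1 on moving objects, 0 on static areas.
    """
    per_pixel = np.abs(pred_flow - motion_flow).sum(axis=-1)
    n_dynamic = motion_mask.sum()
    # Zero out supervision in static regions before averaging.
    return (per_pixel * motion_mask).sum() / max(n_dynamic, 1)
```

Spurious motion flow at a masked-out (static) pixel contributes nothing to the loss, which matches the filtering behavior described in the response.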
Summary: This paper presents a novel approach to dynamic scene reconstruction by incorporating explicit motion priors into 3D Gaussian Splatting (3DGS). The proposed framework, MotionGS, introduces an optical flow decoupling module that separates camera flow and motion flow, which respectively correspond to camera movement and object motion. This separation allows for more precise motion guidance during the deformation of 3D Gaussians. Additionally, a camera pose refinement module is implemented to alternately optimize 3D Gaussians and camera poses, addressing inaccuracies in camera pose estimation. Extensive experiments demonstrate that MotionGS outperforms state-of-the-art methods on datasets such as NeRF-DS and HyperNeRF, achieving significant improvements in both qualitative and quantitative results for dynamic scene reconstruction. Strengths: 1. The paper is written clearly: The authors present a well-structured flow from the problem definition, through the intuition behind the approach, to the implementation and analysis. Each section is closely connected to the main point, making the paper easy to follow. 2. The proposed method is novel, intuitive, and simple to implement: MotionGS introduces a unique approach by decoupling optical flow into camera flow and motion flow, providing explicit motion guidance for 3D Gaussian Splatting. This method is innovative and straightforward, making it easy to adopt and integrate into existing systems. 3. Experiments validate the results well, with extensive visualizations and ablation studies: The paper robustly validates the proposed method with extensive experiments on datasets like NeRF-DS and HyperNeRF. Numerous visualizations and detailed ablation studies clearly demonstrate the effectiveness of each component of MotionGS. Weaknesses: 1. 
The time, memory, and storage costs are not revealed: While the proposed method is presented as a simple solution, it is crucial to analyze the additional computational burden it imposes. The paper does not provide an analysis of the time, memory, and storage requirements, which are important factors in evaluating the practicality of the method. 2. The data sampling mechanism is missing: The paper lacks details on the data sampling mechanism used for training. It is unclear how image pairs are sampled: whether t and t+1 can be any two frames in the video or must be adjacent to each other. If the former is true, it would imply that N(N−1) optical flow maps need to be calculated, which could significantly impact storage and time requirements. Providing these details is essential to understand the overall efficiency and feasibility of the approach. Technical Quality: 3 Clarity: 3 Questions for Authors: If no flow net is adopted, is it possible to calculate the loss as follows? 1. Estimate optical flow as the sum of camera flow and motion flow 2. Use the optical flow to warp I_t+1, and calculate the loss with I_t. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Section 1 Response to Reviewer rwSL We thank the reviewer for the constructive assessment of our work. In the subsequent sections, we respond to each concern in detail. Please feel free to use the discussion period if you have any additional questions. ## 1.1 Weaknesses ### 1.1.1 Time, memory, and storage costs Thank you for your question. In Table 4 of Appendix A.2, we have provided a detailed breakdown of the rendering speed and storage requirements of our method on the NeRF-DS dataset. Our approach aims to introduce explicit motion constraints and pose refinement during training without bringing additional burden to the original Deformable-3DGS during inference. Thus, the rendering speed and inference time of our method are consistent with our baseline. For model training, we list the training time per scene and peak memory usage on the NeRF-DS dataset as below, providing a comprehensive assessment of resource usage during training. Compared to our baseline, our approach incurs increased training time and peak memory usage. This is primarily due to the additional rendering of Gaussian flow and the refinement of camera poses, which are necessary for our method. 
## Training Time on the NeRF-DS Dataset

| Training Time | As | Basin | Bell | Cup | Plate | Press | Sieve |
|:-------------------------|:-------|:--------|:-------|:-------|:--------|:--------|:--------|
| Baseline | 1h 1m | 1h 11m | 1h 42m | 1h 3m | 1h 0m | 0h 51m | 0h 57m |
| Ours (w/o pose refinement) | 1h 8m | 1h 15m | 1h 53m | 1h 13m | 1h 6m | 0h 58m | 1h 3m |
| Ours | 1h 33m | 1h 46m | 2h 34m | 1h 34m | 1h 30m | 1h 17m | 1h 25m |
| NeRF-DS | 6h 43m | 6h 48m | 6h 49m | 6h 50m | 6h 53m | 6h 48m | 6h 47m |

## Max GPU Memory Usage on the NeRF-DS Dataset

| Max GPU Memory (GB) | As | Basin | Bell | Cup | Plate | Press | Sieve |
|:---------|------:|--------:|-------:|------:|--------:|--------:|--------:|
| Baseline | 15.67 | 13.61 | 15.97 | 15.29 | 9.66 | 10.65 | 12.17 |
| Ours | 16.61 | 14.52 | 17.73 | 15.70 | 10.62 | 11.62 | 12.97 |

### 1.1.2 Data sampling mechanism Thanks for your question. Here is a more detailed explanation of our data sampling mechanism: - **Data Sampling Strategy**: We adopt the same data sampling strategy as the baseline method, i.e., reading image sequences in a randomly shuffled order. For an N-frame video, the frames are shuffled and then read sequentially. In each iteration, we read two frames and calculate the optical flow between them. To enhance efficiency, the second image from the last iteration is used as the first image in the current iteration. Thus, except for the first iteration, only one new image is read in each subsequent iteration. Consequently, there are (N-1) iterations per epoch, with optical flow computed once in each iteration. This strategy balances the introduction of accurate motion priors with maintaining training efficiency. - **Optical Flow Calculation and Storage Strategy**: During the first epoch of training, we calculate the optical flow for all adjacent frame pairs, resulting in a total of (N-1) optical flow maps.
In subsequent epochs, we do not reshuffle the image sequence, allowing us to reuse the optical flow maps calculated in the first epoch. This effectively eliminates the need to recompute optical flow maps in each epoch, significantly reducing computational overhead. Taking the "as" scene (846 frames) in the NeRF-DS dataset as an example, the optical flow calculation for each pair takes approximately 22 ms, resulting in a total computation time of 18.7 seconds. Each optical flow map requires around 0.99 MB of storage, resulting in a total storage requirement of 835.13 MB. ## 1.2 Questions ### 1.2.1 Self-supervised flow loss We appreciate the insightful suggestion provided by the reviewer. Following the reviewer's suggestion, we conduct experiments to evaluate this approach and have confirmed its effectiveness. Specifically, we estimate the Gaussian flow corresponding to the optical flow and use it to warp the $I_t$ frame. We then compute the photometric loss with the $I_{t+1}$ frame. In experiments on the NeRF-DS dataset shown below, this method outperforms our baseline but is less effective compared to our proposed method.

## Novel View Synthesis Results of NeRF-DS Dataset

| Method | SSIM↑ | PSNR↑ | LPIPS↓ |
|:--------------|-------:|-------:|-----------:|
| baseline | 0.8394 | 23.61 | 0.1970 |
| ours (w/o pose refinement) | **0.8609** | **24.12** | **0.1763** |
| Self-supervised flow loss | 0.8474 | 23.76 | 0.1807 |

We hypothesize that the discrepancy arises because the self-supervised loss may not provide accurate supervision in areas with similar colors. Nevertheless, it is evident that employing a self-supervised optical flow loss can reduce dependence on off-the-shelf optical flow estimation. We provide qualitative experimental results in Figure A of the PDF attachment, which show that this idea can effectively provide motion constraints.
When an optical flow estimation network is either unavailable or inaccurate, this approach can serve as a valuable alternative to improve rendering quality. We sincerely appreciate your valuable idea, which has greatly enhanced our understanding of the problem. We are excited to incorporate this perspective into our research. --- Rebuttal Comment 1.1: Title: Issues addressed well Comment: All my concerns have been addressed well. Besides, I appreciate that the authors have evaluated the self-supervised setting. I adjust my score to 7 and suggest putting this extra experiment in the supplementary material. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer rwSL Comment: Dear Reviewer rwSL, Thank you for your recognition and adjustment of the score. We are glad that our rebuttal effectively addressed your concerns. We also appreciate your recognition of our efforts in evaluating the self-supervised setting. We have incorporated your suggestions, and the supplementary material now contains detailed descriptions, figures, and tables related to the self-supervised experiments. Thank you again for your constructive feedback and guidance throughout the review process. Best regards, Authors of Submission 3272
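The self-supervised alternative evaluated in the rebuttal above (warp one frame with the predicted flow and compare it photometrically against the other frame) can be sketched as below. This is a hedged illustration, not the authors' implementation: the function name is hypothetical, a single grayscale channel is used for simplicity, and nearest-neighbor sampling stands in for the differentiable bilinear warping a real training pipeline would need.

```python
import numpy as np

def photometric_warp_loss(img_t, img_t1, flow):
    """Warp frame t+1 back to frame t with the flow and take the L1 error.

    img_t, img_t1 : (H, W) grayscale frames.
    flow          : (H, W, 2) flow from frame t to t+1, stored as (dx, dy).
    """
    H, W = img_t.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Nearest-neighbor sampling of frame t+1 at the flow-displaced locations,
    # clamped to the image border.
    x = np.clip(np.round(u + flow[..., 0]).astype(int), 0, W - 1)
    y = np.clip(np.round(v + flow[..., 1]).astype(int), 0, H - 1)
    warped = img_t1[y, x]
    return np.abs(warped - img_t).mean()
```

If the flow is correct, the warped frame aligns with frame t and the loss is small; misaligned flow yields a large photometric residual, which is what drives the self-supervised constraint.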
Rebuttal 1: Rebuttal: # Section 0: Response to all reviewers We would like to extend our heartfelt gratitude to all the reviewers for their thorough evaluation and constructive feedback on our work. Their insights have been invaluable in refining and enhancing the quality of our research. Below, we provide additional explanations and clarifications, along with a list of references cited during the rebuttal period and an attached PDF containing supplementary figures. ## Section 0.1: Clarifications on Assumptions and Preconditions To provide a clearer understanding of our approach, we would like to further explain the assumptions and preconditions underlying our method: 1. **Use of COLMAP:** - Like many 3DGS-based methods, our approach relies on COLMAP for initial camera pose estimation, which requires the presence of sufficient static features in the scene. Typically, such static features are widely present in most real-world scenarios, especially in background areas. Moreover, even if the initial poses provided by COLMAP are not perfectly accurate (e.g., HyperNeRF dataset), our camera pose refinement module can adaptively adjust these poses, ensuring high-quality reconstruction of dynamic scenes. 2. **Use of Optical Flow Estimation Network:** - We recognize that computing optical flow requires sufficient overlap between images. In all the tested scenes, this overlap condition is consistently met between any two frames (see data sampling mechanism in Section 1.1.2). - For cases where there might be significant viewpoint differences in long videos, we recommend segmenting the video and shuffling the frames within each segment to ensure sufficient overlap between adjacent frames in the shuffled sequence. 3. **Canonical Representation of 3DGS and Depth Estimation:** - Similar to our baseline, we first initialize a canonical 3DGS without deformation fields. This is crucial for obtaining scale-consistent depth necessary for subsequent camera flow computation. 
Poor Gaussian initialization might be a challenge for our method, as seen in the failure case in DyNeRF scenes. ## Section 0.2: Related Work on Optical Flow Decoupling To offer a more comprehensive understanding of our approach, we further discuss related work on optical flow decoupling. Utilizing optical flow to provide motion priors for dynamic scenes has been explored in previous works. For instance, Dynamo-Depth [1] synthesizes optical flow from both camera motion and object motion for self-supervised depth estimation. However, methods for dynamic scene reconstruction typically do not explicitly decompose optical flow into camera and object motions. Instead, they focus on identifying pixel correspondences in 3D space, known as scene flow. For example, NSFF [2] combines the predicted scene flow with camera poses to project points onto adjacent frames to compute optical flow, and then uses optical flow priors to supervise the scene flow. Similarly, Dynpoint [3] calculates scene flow priors by estimating depth and optical flow, which are then used to constrain correspondences between adjacent frames in dynamic scenes. Unlike in these NeRF-based methods, the correspondence between Gaussians and pixels is a complex many-to-many mapping, making it impossible to use scene flow directly as a constraint. To address this issue, we propose decoupling 2D motion flow from optical flow to constrain the deformation field. ## Section 0.3: Evaluation of Camera Pose Quality We would like to further discuss and elaborate on the challenge of evaluating camera pose quality in dynamic scenes. Unlike static scene datasets (e.g., Tanks & Temples) that use COLMAP to obtain the ground truth of camera poses, we assume that COLMAP may not provide accurate poses for dynamic scene datasets. Therefore, we refine the camera poses initialized by COLMAP. In this setting, we lack ground truth for a direct quantitative comparison.
Despite this, Table 5 of our main paper shows that the camera poses refined by our method further boost the rendering quality, demonstrating the accuracy of our refined poses. For further clarity, additional visualizations of pose trajectories are provided in Figure C, and a video version is included in our comments to the AC. ## Section 0.4: References Here are the references cited during the rebuttal: [1] Dynamo-depth: fixing unsupervised depth estimation for dynamical scenes, NeurIPS 2023 [2] Neural scene flow fields for space-time view synthesis of dynamic scenes, CVPR 2021 [3] Dynpoint: Dynamic neural point for view synthesis, NeurIPS 2023 [4] GMFlow: Learning optical flow via global matching, CVPR 2022 [5] MDFlow: Unsupervised optical flow learning by reliable mutual knowledge distillation, TCSVT 2022 [6] MonoNeRF: learning generalizable NeRFs from monocular videos without camera poses, ICCV 2023 [7] DynIBaR: Neural dynamic image-based rendering, CVPR 2023 [8] GaussianFlow: Splatting Gaussian dynamics for 4D content creation, arXiv 2024 [9] 3D Gaussian Splatting for Real-Time Radiance Field Rendering, SIGGRAPH 2023 [10] MoDGS: Dynamic Gaussian Splatting from Causually-captured Monocular Videos, arXiv 2024 We appreciate the reviewers’ suggestions, which have guided us in further improving the robustness and clarity of our work. We look forward to any additional feedback and are committed to advancing this research in light of the insightful comments provided. Thank you once again for your time and effort in reviewing our submission. We have also included an attached PDF document with additional details and supporting information for your reference. Pdf: /pdf/291cd953a3cbb4b711b86aeffc32b0459b38e415.pdf
NeurIPS_2024_submissions_huggingface
2024
Policy Optimization for Robust Average Reward MDPs
Accept (poster)
Summary: The authors study a policy gradient algorithm for solving unichain average reward robust MDPs. They show a linear convergence rate for increasing step sizes and a $O(1/k)$ convergence rate for a fixed step size, where $k$ is the number of iterations of the algorithm. Strengths: This is a good paper. It is well-written and clearly defines its objectives. It extends to the case of unichain average reward robust MDPs the current state-of-the-art results for policy gradient methods in discounted robust MDPs and average reward nominal MDPs. Weaknesses: I only have three minor comments as regards the claims of the authors, which I find a bit too far from the exact results proved in their theorems. I think that the paper requires some clarifications. * The authors should mention the unichain assumption much earlier in the paper. The word *unichain* isn’t even present in the abstract; in the introduction, it is only mentioned at the very end of Section 2. This assumption is crucial to establish several important properties of the average value function (such as uniformity across states) and the relative value function (such as continuity). This needs to be emphasized much earlier, and highlighted when the authors give their main contributions. Given the connection between mean-payoff perfect-information stochastic games and robust MDPs [0], the sentence from l13-14 of the abstract: *Moreover, our algorithm is the first algorithm that converges to the global optimum with polynomial time for robust average reward MDPs* would mean that the authors have solved a long-standing open problem, which is not the case since they solely focus on unichain instances (although I appreciate the results of the authors). Please clarify.
 * I am a bit confused with the claim about the polynomial time complexity of the algorithms. Usually, a (weakly) polynomial time complexity refers to a number of arithmetic iterations polynomial in $\log(1/\epsilon)$, in the dimension of the problem and in the logarithm of the entries of the problem (here, the number of bits necessary to represent the rewards and the transition probabilities). So why is $O(1/\epsilon)$ a polynomial-time complexity? Similarly, the $O(.)$ notations seem to be hiding the $C_{PL}$ term from (15), and it’s not clear how this is bounded in terms of $S,A$ and the parameters of the problem. Same with the constant $L_{\pi}$ from (16), can you provide a bound on this? The closest statement to a polynomial-time complexity appears to be Th 4.6, but there again we have terms like $M$ appearing in the bound, and it’s not clear how to bound them. Please clarify or reformulate your claims about *polynomial-time* algorithms. * Some of the definitions appear incorrect. In all generality the definition of the bias should involve some Cesaro averages or the sum may diverge; the definition given by the authors is correct only for aperiodic Markov chains (see my other comments below). [0] Chatterjee, K., Goharshady, E. K., Karrabi, M., Novotný, P., & Žikelić, Đ. (2023). Solving long-run average reward robust mdps via stochastic games. arXiv preprint arXiv:2312.13912. Technical Quality: 3 Clarity: 3 Questions for Authors: * The authors in [1] mention a $O(1/\epsilon)$ convergence rate for their algorithm in their conclusion. How does that compare to your $O(1/\epsilon)$ result? * Under the unichain assumption, it is well-known that a discounted MDP with a discount factor sufficiently large can solve average reward MDPs (typically with a discount factor larger than $1-\epsilon/D$ where $D$ is the diameter of the MDP). There is a vast number of recent papers on policy gradient methods for nominal discounted MDPs and discounted robust MDPs. 
Can the authors compare the complexity of their algorithms with the complexity of the best policy gradient methods for discounted MDPs combined with the appropriate (large) discount factor for unichain MDPs? * I understand that the robust value function may be non-smooth for robust MDPs. But isn’t it also the case in the discounted setting? How do authors in the discounted setting bypass that for designing policy gradient algorithms? Is it the same `trick' in your paper? Also, for what geometry of the uncertainty set is the robust value function smooth? * l159-160, definition of the bias: in all generality the definition of the bias/relative value function should involve the Cesaro average, see p 338 in [2]. In my understanding the definition given by the authors only holds for *aperiodic* Markov chains. Please clarify. * A minor comment: Usually, rewards are maximized, and costs are minimized. In this paper, rewards are minimized, which is a bit weird, even though the maths don’t change. * Please introduce the equations for $d^{\pi}$ in the main body. l287: typo, the first two sentences of rmk 4.8 should be one sentence. [1] A. Tewari and P. L. Bartlett. Bounded parameter Markov decision processes with average reward criterion. In International Conference on Computational Learning Theory, pages 263–277. Springer, 2007. [2] M. L. Puterman. Markov decision processes: Discrete stochastic dynamic programming, 1994. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No problem here. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
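The reviewer's point about the Cesàro average can be illustrated numerically. Below is a minimal sketch, not taken from the paper: for a hypothetical 2-state periodic chain, the partial sums defining the bias oscillate forever, while their Cesàro averages converge to the bias obtained in closed form from the deviation matrix $h = (I - P + \mathbf{1}d^\top)^{-1}(I - \mathbf{1}d^\top)r$.

```python
import numpy as np

# Hypothetical 2-state periodic chain: the bias defined as a plain sum
# sum_t (P^t r - g * 1) does not converge, but its Cesaro average does.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # period-2 chain
r = np.array([1.0, 0.0])            # per-state reward
d = np.array([0.5, 0.5])            # stationary distribution (d P = d)
g = d @ r                           # average reward, 0.5

# Partial sums S_T = sum_{t=0}^{T-1} (P^t r - g 1) and their Cesaro averages.
S, partial_sums = np.zeros(2), []
Pt_r = r.copy()
for _ in range(1000):
    S = S + (Pt_r - g)
    partial_sums.append(S.copy())   # alternates between two values forever
    Pt_r = P @ Pt_r
cesaro = np.mean(partial_sums, axis=0)

# Closed form via the deviation matrix: h = (I - P + 1 d^T)^{-1} (I - 1 d^T) r.
Pi_star = np.outer(np.ones(2), d)
h = np.linalg.inv(np.eye(2) - P + Pi_star) @ (np.eye(2) - Pi_star) @ r

print(cesaro)   # ~ [ 0.25, -0.25]
print(h)        # ~ [ 0.25, -0.25]
```

For an aperiodic chain the plain partial sums already converge, which is why the definition without Cesàro averaging is fine in that case.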
Rebuttal 1: Rebuttal: We thank the Reviewer for the helpful and insightful feedback. Below we provide point-to-point responses to the weaknesses and questions. **W1. Present assumption 2.1 earlier.** In the revision, we will introduce the unichain assumption earlier in the abstract and emphasize that our contributions are based on the unichain assumption. **W2. Polynomial time complexity of the algorithm.** Since the complexity of our algorithm is in the order of $\mathcal{O}(1/\epsilon)$ not $\mathcal{O}(\log(1/\epsilon))$, the polynomial-time complexity claim is not accurate. We will reformulate our claims in the revision. Though our algorithm doesn't have polynomial-time complexity, our algorithm is still the first algorithm that has finite time complexity for robust average reward MDPs. Since $M$ and $C_{PL}$ depend on the structure of the underlying MDPs, characterizing the upper bounds of $M$ and $C_{PL}$ is non-trivial. We would like to emphasize that similar constants also appear in prior works on robust discounted reward MDPs and average reward MDPs ([3], [4], [5], [6]). For the constant $L_\pi$, when the underlying MDP is ergodic, the upper bound of $L_\pi$ is characterized in [6], which also depends on the structure of the MDP. From Lemma 14 in [6], we have that $L_\pi$ is in the order of $\mathcal{O}\Big(\frac{C_e^2S^2}{(1-\lambda)^2}\sqrt{A}\Big)$, where $C_e$ and $\lambda$ are constants that characterize the geometric ergodicity of the MDPs. **W3. Clarify definitions of average reward.** In [2], the Cesaro-limit-sense relative value function was defined for periodic chains. This is because the Markov chain is not assumed to be unichain in [2]. When the Markov chain is unichain, the Markov chain only contains a single recurrent class and some transient states, and the average reward is independent of the initial state. Since there is only one recurrent class, the relative value function does not need to involve the Cesaro average. **Q1. 
Compare with an existing work with $\mathcal{O}(1/\epsilon)$ convergence rate.** In [1], the $\mathcal{O}(1/\epsilon)$ convergence rate was achieved for robust average reward MDPs when the uncertainty set is a bounded parameter MDP. Specifically, for each state-action pair $(s, a)$, the transition probability $p_s^a(s^\prime)$ is bounded between a lower bound $l_s^a(s^\prime)$ and an upper bound $u_s^a(s^\prime)$. In our paper, we consider more general uncertainty sets. Moreover, [1] studies the convergence rate by building the connection between discounted reward MDPs and average reward MDPs while our paper focuses on a policy gradient-based method. We discussed [1] in the related works section of our paper. We will also compare with the $\mathcal{O}(1/\epsilon)$ convergence rate in the revision. **Q2. Compare with policy gradient method for discounted MDPs with large discount factor.** There are recent works on policy gradient methods for robust discounted MDPs. In [3], robust MDPs are considered under the $R$-contamination uncertainty set. The robust policy gradient converges to the global optimum with iteration complexity $\mathcal{O}(1/\epsilon^3)$. In [4], the double-loop robust policy gradient was proposed and was further proved to converge to the global optimum with complexity $\mathcal{O}(1/\epsilon^4)$. In [6], with sufficiently large step size, the robust policy mirror descent converges to the global optimum with complexity $\mathcal{O}(1/\epsilon)$. To the best of the authors' knowledge, there are no works building the connection between the robust discounted reward and the robust average reward. Therefore, it's non-trivial to compare the complexity of our algorithm with that of policy gradient methods for discounted MDPs combined with a large discount factor. 
The authors of [7] show that there exists a discount factor $\gamma_{bw}\in(0, 1)$ such that any policy that is $\gamma$-discount optimal for $\gamma\in(\gamma_{bw}, 1)$ is also Blackwell optimal when the uncertainty set is definable. However, how to choose $\gamma_{bw}$ is still an open problem. In [7], only value iteration based algorithms are proposed and no stopping criterion is introduced for their algorithms. **Q3. Bypass non-smoothness of robust value function in discounted MDPs.** For discounted reward MDPs, different methods are proposed to bypass the non-smoothness of the robust value function. In [3], the total-variation uncertainty set is studied and the non-smooth robust value function is smoothed by approximating the max operator using the LogSumExp (LSE) operator. In [4], the non-smoothness of the robust value function is bypassed by introducing the Moreau envelope function, which can also be viewed as a smoothed version of the robust value function. In [5], the authors develop a bound similar to Lemma 4.1 in our paper by applying the Bellman equation of the robust value function, and further apply it to show the convergence of the mirror descent algorithm. However, it is not applicable in our case as the average reward doesn't satisfy the Bellman equation. A sufficient condition for the robust value function to be smooth is that there exists a constant $C$ such that $\|d\_s\^{\pi, \mathsf{P}\_{\pi}} - d\_s\^{\pi\^\prime, \mathsf{P}\_{\pi\^\prime}}\|\leq C\\|\pi-\pi\^\prime\\|$, which depends on the geometry of the uncertainty set. Since it involves the worst-case transition kernel and a closed-form expression doesn't exist, it is unclear for what geometry of the uncertainty set the robust value function is smooth. **Q4. Clarify the definition of relative value function.** Please refer to the response to W3. **Q5. Average cost.** In the revision, we will replace rewards with costs. **Q6. 
Introduce the equations for $d^\pi$.** We provide the equations for $d^\pi_\mathsf{P}$ in the revision: $d^\pi_\mathsf{P}$ is the stationary measure over states under policy $\pi$ and transition kernel $\mathsf{P}$, which satisfies $d^\pi_\mathsf{P} \mathsf{P} = d^\pi_\mathsf{P}$. **Q7. Typo.** Fixed. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: I would like to thank the authors for their (very!) detailed response to my comments. Provided that they indeed make all the changes listed above, the paper will read better and contribute to the literature on gradient methods for robust MDPs. I have increased my score as a consequence. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for the response and helpful comments, which greatly helped to improve the quality of the paper.
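As a numerical aside on the fixed-point equation quoted in Q6, the stationary measure of a kernel can be recovered by solving that equation as a linear system; the 2-state kernel below is an arbitrary illustrative example, not from the paper:

```python
import numpy as np

# Hypothetical transition kernel induced by a fixed policy (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.6, 0.4]])

# Solve d P = d together with sum(d) = 1 as one linear system:
# stack the normalization row under (P^T - I).
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
d, *_ = np.linalg.lstsq(A, b, rcond=None)

print(d)          # stationary measure, here ~ [6/7, 1/7]
print(d @ P - d)  # ~ 0: d is a fixed point of the kernel
```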
Summary: The paper studies gradient-based methods for robust average reward MDPs. The paper first derives a sub-gradient for the robust average reward (which is nonsmooth), and then uses it to define a mirror descent algorithm. They prove a few structural properties of the setting and then use them to provide a convergence guarantee for their algorithm. They also provide some experimental results. Strengths: Overall the paper and proofs are written with high clarity and quality, but I do have several technical concerns (in the questions section). I would raise my soundness score and overall evaluation if they were all adequately addressed. There is good originality in the problem setting and the fact that they consider gradient-based methods. In particular it requires surmounting certain technical difficulties related to the robust average reward. If the technical concerns can be addressed, then I think these theoretical developments could prove to have good significance to future work. Weaknesses: The main weakness of the paper is what I perceive to be several technical concerns (listed in the questions section). I would raise my soundness score and overall evaluation if they were all adequately addressed. Technical Quality: 3 Clarity: 3 Questions for Authors: Maybe the unichain assumption 2.1 should be placed earlier, since the stationary distributions $d^\pi$ are not generally well-defined in the absence of this assumption. In line 177, is the definition of $d_{\mathcal{P}}^\pi$ unique? Lemma 3.2: similar question, I think the worst-case transition kernel for a policy $\pi$ is not unique. In this case I wonder if $d_{\mathcal{P}}^\pi$ and $Q_{\mathcal{P}}^\pi$ are uniquely defined? And thus my main question is, for the sub-gradient, can we use different worst-case transition kernels in $d_{\mathcal{P}}^\pi$ and $Q_{\mathcal{P}}^\pi$, or do they need to use the same one? (The proof makes it seem like they are using the same one?) 
Thus maybe the lemma statement would need to be amended to discuss this. Lemma 4.2: Is $C_{PL}$ guaranteed to be finite? The usual definition of unichain (e.g. in Puterman) is that the transition matrix has a single recurrent class plus a (possibly empty) set of transient states. The transient states would have stationary measure 0 which could cause $C_{PL}$ to blow up? I have a similar concern about the quantity $M$ in Lemma 4.5. I am also confused about the denominator (is there a missing subscript $\pi$?) Where is the proof of Lemma 4.3? It is claimed that a result from [16] can be easily extended to a more general (unichain) setting; however, because such details can be very thorny in the average reward setting, I think a proof should be provided. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No major limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Listed in the question section.** Please refer to the responses to Q1-Q6. **Q1. Unichain assumption should be placed earlier.** We thank the reviewer for this comment. In the revision, we will introduce the unichain assumption earlier in the paper and emphasize that our contributions are based on the unichain assumption. **Q2. Uniqueness of $d_\mathcal{P}^\pi$.** The worst-case transition kernel may not be unique for general uncertainty sets, and thus $d_\mathcal{P}^\pi$ may not be unique. **Q3. Uniqueness of $d_\mathcal{P}^\pi$ and $Q_\mathcal{P}^\pi$.** Though the worst-case transition kernel may not be unique, $Q\_\mathcal{P}\^\pi$ is unique. From the definition of the worst-case transition kernel, for a worst-case transition kernel $\mathsf{P}\_\pi$ of a policy $\pi$, we have that $Q\_{\mathsf{P}\_\pi}\^\pi = \max\_{\mathsf{P}\in\mathcal{P}} Q\_{\mathsf{P}}\^\pi$. Therefore, the robust action value function is unique. For the sub-gradient, the worst-case transition kernels of $d\_\mathcal{P}\^\pi$ and $Q\_\mathcal{P}\^\pi$ can be different, but this does not affect our results. **Q4. Boundedness of $C_{PL}$.** We agree with the reviewer that in the unichain setting, $C_{PL}$ might not be finite if transient states exist. The constant $C_{PL}$ depends on the structure of the underlying MDPs and also appears in prior works on discounted reward MDPs and non-robust average reward MDPs ([1], [2], [3], [4]). If the Markov chain is ergodic, i.e., there is only a single recurrent class, then $C_{PL}$ is guaranteed to be finite. **Q5. Boundedness of $M$.** Similar to $C_{PL}$, the constant $M$ depends on the structure of the underlying MDPs. If the Markov chain is ergodic, then $M$ is guaranteed to be finite. The definition of $M$ in our paper is accurate. For the denominator, the subscript $\mathsf{P}$ denotes the transition kernel that we maximize over. 
In the numerator, the subscript $\mathsf{P}_\pi$ denotes the worst-case transition kernel of $\pi$. **Q6. Proof of Lemma 4.4.** In Lemma 14 of [4], the $L_\pi$-Lipschitzness of the relative value function is proved. In their paper, $L_\pi = 2C_m^2C_p\kappa_r + 2C_mC_r$. For $C_p$ and $\kappa_r$, the proof of Lemma 18 in [4] shows that the ergodicity of the Markov chain is not required and the results can be extended to the unichain setting. We only need to show that $C_m$ is finite for unichains. In [4], $C_m$ is defined as the maximum of the operator norm of the matrix $(I-\Phi P_\pi)^{-1}$ across all policies $\pi\in\Pi$. For any $\pi\in\Pi$, following the proofs in Section A.2 of [4], we have that the eigenvalues of $(I-\Phi P_\pi)$ are non-zero for unichains since a unichain only has a single recurrent class and the stationary distribution exists. Therefore, for any $\pi\in\Pi$, $\|(I-\Phi P_\pi)^{-1}\| < \infty$. Moreover, since the collection of all policies $\Pi$ is compact, we have that the maximum of the operator norm of the matrix $(I-\Phi P_\pi)^{-1}$ across all policies $\pi\in\Pi$ is finite. Therefore, $L_\pi$ exists for unichains. We will add the proof sketch of Lemma 4.4 in the revision. **Reference.** [1] Y. Wang and S. Zou. Policy gradient method for robust reinforcement learning. In Proc. International Conference on Machine Learning (ICML), volume 162, pages 23484–23526. PMLR, 2022. [2] Q. Wang, C. P. Ho, and M. Petrik. Policy gradient in robust MDPs with global convergence guarantee, 2023. [3] Y. Li, G. Lan, and T. Zhao. First-order policy optimization for robust Markov decision process. arXiv preprint arXiv:2209.10579, 2022. [4] N. Kumar, Y. Murthy, I. Shufaro, K. Y. Levy, R. Srikant, and S. Mannor. On the global convergence of policy gradient in average reward Markov decision processes. arXiv preprint arXiv:2403.06806, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for your response. 
From this response it seems that the results are therefore based on the ergodic (one recurrent class with no transient states) assumption, rather than unichain? If so, I would be happy to increase my score if this is presented earlier and more centrally in the paper. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the response. After reading the reviewer's comments, we found that assuming ergodicity instead of unichain is more reasonable. In this case, we don't need to make additional assumptions on $C_{PL}$ and $M$. We will present the assumption earlier in the paper and make it clear.
Summary: The authors present a gradient-based algorithm for average reward robust MDPs (finite MDPs). This setting has been studied in prior works; however, this paper proposes a policy optimization-based algorithm, which has not yet been done. They do a theoretical analysis of this setting and show linear convergence (by increasing step size), finite iteration complexity, and quite nicely show that the PL condition is satisfied and therefore, perhaps trivially, global convergence. Strengths: - The paper is easy to read. - Policy gradient-based approach for robust average reward MDPs - There are multiple theoretical contributions: Linear convergence, Global optimality, derived sub-gradient of robust average reward MDPs Weaknesses: - Presentation of theoretical results is not sharp or rather quite bad. They do not mention the assumptions under which the lemma/theorem statements hold. I am uncertain if there are any further assumptions. - Much of the theory used already exists in the literature on robust MDPs and the infinite horizon case (which is not bad since the authors make sure to explain that they are not trivially using it but extending it, e.g., lines 197-204). This is currently described in pieces throughout the paper; I would appreciate it if the authors could concretely explain in one place (perhaps in related works) the tools used from prior works. Technical Quality: 3 Clarity: 3 Questions for Authors: Questions: 43-44 -> What are these algorithms based upon in contrast to gradient-based? Is there an assumption on eqn 3 about boundedness? Why are V and Q necessarily finite? What will be a sufficient condition for assumption 2.1? Also, motivating beforehand why you need this assumption would be good. Is there a bound/relation between C_{PL} and M? What does the non-robust method in line 307 optimize for? What is its objective and which transition kernel is used? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the helpful and insightful feedback. Below we provide point-to-point responses to the weaknesses and questions. **W1. Didn't mention the assumptions for the lemma/theorem statements.** We thank the reviewer for this comment. In the revision, we will rephrase the statements of lemmas and theorems to include the assumptions we use. Specifically, Theorem 4.6: Under Assumption 2.1, with step size $\eta_k \geq \eta_{k-1}\Big(1-\frac{1}{M}\Big)^{-1}M$, the robust policy mirror descent satisfies $g\_\mathcal{P}\^{\pi\_k} - g\_\mathcal{P}\^\* \leq (1-\frac{1}{M})\^k(g\_\mathcal{P}^{\pi\_0} - g\_\mathcal{P}\^\*) + (1-\frac{1}{M})\^{k-1}\frac{1}{M\eta\_0}\mathbb{E}\_{s\sim d\_{\mathsf{P}\_{\pi\_0}}\^{\pi\^\*}}[\|\pi\^\*(\cdot|s) - \pi\_0(\cdot|s)\|\^2].$ For Theorem 4.7: Under Assumption 2.1, with step size $\eta = \frac{1}{L_\pi}$ for all $k\geq 1$, each iteration of the robust policy mirror descent satisfies $g\_\mathcal{P}\^{\pi\_k} - g\_\mathcal{P}\^\* \leq \max\Big\\{\frac{4L_\pi}{\omega k},(\frac{\sqrt{2}}{2})^k\big(g_\mathcal{P}^{\pi_0} - g_\mathcal{P}^*\big)\Big\\}$, where $\omega = (2\sqrt{2S}C_{PL})^{-2}$. **W2. Explain the tools used from prior works in one place.** We thank the reviewer for this comment. We summarize the results that are used from prior works in Section 1.2. We will add the following paragraph in the revision: In [1], the Fr\'echet sub-gradient has been derived for robust discounted reward MDPs. However, the Fr\'echet sub-gradient for robust discounted reward MDPs in [1] cannot be extended to the average reward setting since in [1], the discount factor $\gamma$ is required to be strictly less than 1. In [2], the performance difference lemma was derived for average reward MDPs. In our paper, we provide a lower bound on the difference of robust average reward between two policies. 
Such a bound was also derived in [1] for the robust discounted value function by applying the Bellman equation of the robust value function, which is not applicable in our case as the average reward itself doesn't satisfy the Bellman equation. We also extend the Lipschitz property of the non-robust relative value function in [3] to the unichain setting. In [4] and [5], the convergence rate of policy gradient descent for non-robust discounted reward MDPs was derived. In their analyses, the smoothness of the value function is required. However, the robust average reward might not be smooth. Therefore, the approaches applied in [4] and [5] cannot be directly extended to robust average reward MDPs. **Q1. Details on related works.** [6] and [10] build the connection between discounted reward MDPs and average reward MDPs and prove the existence of Blackwell optimal policies. Algorithms based on value iteration are further proposed. [7] is an extension of [6] with parts of the state space having arbitrary transitions and other parts being purely stochastic. In [8] and [9], model-based and model-free robust average reward MDPs are studied and robust relative value iteration algorithms are proposed. We will add this discussion in the revision. **Q2. Boundedness assumption on V and Q.** We thank the reviewer for pointing this out. Since $V$ and $Q$ are unique up to an additive constant, similar to [3], we consider the projection of the value function onto the subspace orthogonal to the $\textbf{1}$ vector. In this case, our theoretical results still hold. **Q3. Sufficient condition and motivation for assumption 2.1.** The sufficient condition for Assumption 2.1 is that for any $\pi\in\Pi$ and $\mathsf{P}\in \mathcal{P}$, the Markov chain only contains a single recurrent class and some transient states. The unichain assumption is important and widely used since under the unichain assumption, the robust average reward is identical for every starting state. 
To derive Lemma 4.1, we leverage the fact that the robust average reward is independent of the initial state. If the unichain assumption doesn't hold, the inequality may not hold, and this is a key step to derive the global optimality of our algorithm. In practice, it is often the case that only unichains are of interest to the decision-making problem. For example, the nominal transition kernel is obtained from samples, and is a unichain. Then, the true transition kernel must be a unichain, and therefore to obtain a robust policy that works well on the true transition kernel, it is sufficient to only consider unichains. Even in standard MDPs, extending results from unichain to, e.g., multichain problems is very challenging. Extending our results under a relaxed assumption beyond unichain is even more challenging, and it is of future interest. **Q4. Relation between $C_{PL}$ and $M$.** The relation between $C_{PL}$ and $M$ is unclear. We have that $M = \sup\_{\pi, \mathsf{P}\in\mathcal{P}}\Big\\|\frac{d\_{\mathsf{P}\_\pi}\^{\pi\^\*}}{d\_\mathsf{P}\^\pi}\Big\\|\_\infty = \sup\_{\pi, s, \mathsf{P}\in\mathcal{P}} \frac{d\_{\mathsf{P}\_\pi}\^{\pi^*}(s)}{d\_\mathsf{P}\^\pi(s)}$ and $C\_{PL} = \max\_{\pi, s} \frac{d\_{\mathcal{P}}\^{\pi\^\*}(s)}{d\_\mathcal{P}\^\pi(s)} = \max\_{\pi, s} \frac{d\_{\mathsf{P}\_{\pi\^\*}}\^{\pi\^\*}(s)}{d\_{\mathsf{P}\_{\pi}}\^\pi(s)}$. The sup of $M$ and $C_{PL}$ can be achieved at different $\pi$. In the numerator, it's unclear if $d\_{\mathsf{P}\_\pi}\^{\pi\^\*}(s)$ is larger than $d\_{\mathsf{P}\_{\pi\^\*}}\^{\pi\^\*}(s)$ or not. **Q5. Objective and transition kernel of baseline algorithm.** The baseline algorithm is trained under the known nominal transition kernel and its objective is to maximize the non-robust average reward. --- Rebuttal 2: Comment: I thank the authors for their replies. I went over them and do not have any further questions. 
Please include the changes as promised in your revised version, particularly regarding assumptions. --- Rebuttal Comment 2.1: Comment: We thank the reviewer for the response. We will revise our paper based on the comments.
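Since the Bregman divergence used in the paper is the squared Euclidean norm, the policy mirror descent update discussed in this thread reduces, per state, to a projected gradient step onto the probability simplex with the robust Q-function in place of the gradient. A minimal sketch (the step size, Q-values, and sorting-based projection routine are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def project_simplex(v: np.ndarray) -> np.ndarray:
    """Euclidean projection of v onto the probability simplex
    (standard sorting-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def mirror_descent_step(pi: np.ndarray, Q: np.ndarray, eta: float) -> np.ndarray:
    """One mirror descent step with the Euclidean Bregman divergence:
    pi'(.|s) = Proj_simplex(pi(.|s) - eta * Q(s, .)), state by state."""
    return np.vstack([project_simplex(pi[s] - eta * Q[s])
                      for s in range(pi.shape[0])])

# Illustrative 2-state, 3-action example with made-up robust Q-values
# (minimization setting: probability mass moves toward low-Q actions).
pi = np.full((2, 3), 1.0 / 3.0)
Q = np.array([[1.0, 0.0, 2.0],
              [0.5, 0.5, 0.0]])
pi_next = mirror_descent_step(pi, Q, eta=0.2)
print(pi_next)               # rows remain valid probability distributions
print(pi_next.sum(axis=1))   # ~ [1., 1.]
```

With a KL Bregman divergence the same update would instead become a multiplicative (softmax-style) step, which is the distinction the reviewer raises in Q2.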
Summary: The authors consider the mirror descent algorithm in the context of robust average cost MDPs. They consider $(s,a)$-rectangular uncertainty sets across a general distance metric. The Bregman divergence chosen for the purpose of analysis is the Euclidean 2-norm distance. The authors leverage a prior result proving the existence of a corresponding Bellman equation for the robust average cost and another prior result proving Lipschitzness of the risk-neutral average cost value function to derive the results in the paper. Since the robust average cost is the maximum average cost across the uncertainty set, it is not differentiable, and hence the authors resort to using the sub-gradient as a substitute. This sub-gradient is shown to be the robust $Q$ function. Subsequently, the performance difference inequality in conjunction with a PL result is proven and leveraged to obtain the final convergence bounds, which are of $O(1/\epsilon)$. They characterize these bounds for both a constant and an increasing step size. Strengths: 1. The problem considered is of significance to the community. Average cost problems are harder to analyze than their discounted cost counterparts due to the absence of the contraction factor. Hence even though it's a technical challenge, this objective is more representative of applications where long term performance is important. And the robust average cost objective is important to ensure optimality in the face of uncertainty associated with the underlying transition kernel. 2. Even though the analysis techniques are inspired by prior literature such as Agarwal et al., Xiao, it is not a straightforward extension and hence this paper provides valuable tools in this context, such as the performance difference inequality, PL inequality, etc., which may be of independent interest to the community. 3. 
The paper yields $O(1/\epsilon)$ convergence bounds which are optimal even when compared to the risk-neutral average cost setting/discounted cost setting. 4. The results are also experimentally validated. Weaknesses: 1. The authors seem to have missed a line of work pertaining to the robust average cost. Although they consider general $(s,a)$-rectangular sets with any distance metric, much of the prior work has considered the KL (Kullback Leibler) distance metric, whose dual formulation yields the exponential cost robust MDPs. Some of the earlier works on this include [Borkar 2010], [BS02], where they characterize asymptotic behavior of dynamic programming algorithms for this robust formulation. Some of the more recent work in this domain can be found in [MMS23] and [MMS24]. [MMS24] considers a policy gradient algorithm for this robust setting, but provides asymptotic convergence bounds to a stationary point. [MMS23] considers modified policy iteration, an algorithm that generalizes both policy and value iteration, and provides finite-time global convergence guarantees. Hence the repeated claim in the paper to have characterized the first policy-based global convergence bounds in the context of robust average cost MDPs has to be modified accordingly. 2. The authors assume an oracle provides $Q^\pi_\mathcal{P}$, that is, the value function corresponding to the argmax kernel associated with policy $\pi$. In the risk-neutral case, one can solve for the state-action value function corresponding to a policy $\pi$ by solving the average cost Bellman equation (which typically involves a matrix inversion). It is not clear how to evaluate this robust value function even when you have access to the model, that is, the nominal transition matrix and the single step costs. 3. The authors assume that the underlying Markov chain is unichain. 
However, a unichain Markov chain allows for transient states whose associated stationary measure is zero. In that case, the concentrability coefficient $C_{PL}$ will be infinite. Hence it seems like ergodicity is a requirement. 4. The increasing step size requires knowledge of $M$, which is not apparent even in the full information setting. For instance, reference [16] in the paper characterizes the Lipschitz constant in terms of the model considered and hence it can be evaluated when given access to the complete model information such as the probability transition matrix, etc. However, it is not apparent how to determine $M$ based on its definition in line 254. [Borkar 2010] Borkar, V. S. (2010, July). Learning algorithms for risk-sensitive control. In Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems–MTNS (Vol. 5, No. 9). [BS02] Borkar, V. S., & Meyn, S. P. (2002). Risk-sensitive optimal control for Markov decision processes with monotone cost. Mathematics of Operations Research, 27(1), 192-209. [MMS24] Moharrami, M., Murthy, Y., Roy, A., & Srikant, R. (2024). A policy gradient algorithm for the risk-sensitive exponential cost MDP. Mathematics of Operations Research. [MMS23] Murthy, Y., Moharrami, M., & Srikant, R. (2023, June). Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs. In Learning for Dynamics and Control Conference (pp. 395-406). PMLR. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Is there a closed-form expression for the value function corresponding to the solution of Equation 7? In exponential cost MDPs, these value functions do indeed have a closed-form expression and I was curious about their generalizations to other distance metrics. 2. Since the Bregman divergence considered is the Euclidean 2-norm, it feels more accurate to refer to the algorithm as policy gradient rather than mirror descent. 
Mirror descent is more commonly employed when the KL divergence is used as the Bregman divergence. Is there any specific reason for not referring to this algorithm as policy gradient? 3. Since the minimization problem is considered, it is more relevant to refer to the setting as average cost rather than average reward as is currently done. Some typos: 1. Line 463.5 in the appendix: should be $+\sum_a\pi(a|s)$ instead of $-\sum_a\pi(a|s)$ 2. In equation 10: should be $\nabla g^{\pi_k}_\mathcal{P}$ instead of $\nabla g^{\pi}_\mathcal{P}$ Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Even though the limitations are not explicitly addressed, the paper has been presented clearly and hence the limitations are apparent. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
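Regarding the reviewer's Q1, the inner maximization $\sigma_{\mathcal{P}_s^a}(v) = \max_{p \in \mathcal{P}_s^a} p^\top v$ generally has no simple closed form beyond special metrics, but for a total-variation ball it can be solved exactly as a small linear program. A sketch using `scipy.optimize.linprog` (the nominal distribution, payoff vector, and radius below are made-up illustrative values):

```python
import numpy as np
from scipy.optimize import linprog

def support_function_tv(p0, v, delta):
    """sigma(v) = max { p @ v : p in simplex, ||p - p0||_1 <= delta },
    solved as an LP with auxiliary variables t >= |p - p0|."""
    n = len(p0)
    c = np.concatenate([-v, np.zeros(n)])   # linprog minimizes, so negate v
    # Constraints: p - t <= p0, -p - t <= -p0, sum(t) <= delta.
    A_ub = np.block([[ np.eye(n),        -np.eye(n)],
                     [-np.eye(n),        -np.eye(n)],
                     [np.zeros((1, n)),  np.ones((1, n))]])
    b_ub = np.concatenate([p0, -p0, [delta]])
    A_eq = np.concatenate([np.ones(n), np.zeros(n)]).reshape(1, -1)  # sum(p) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (2 * n))
    return -res.fun

p0 = np.array([0.5, 0.3, 0.2])   # nominal transition row (illustrative)
v = np.array([1.0, 0.0, -1.0])   # payoff vector (illustrative)
# The adversary moves delta/2 = 0.1 mass from the lowest to the highest
# component of v, so sigma = p0 @ v + 0.1 * (1 - (-1)) = 0.3 + 0.2 = 0.5.
print(support_function_tv(p0, v, delta=0.2))
```

The same LP viewpoint explains why the dual forms quoted in the rebuttal (for TV, chi-square, and KL sets) reduce the inner maximization to a low-dimensional scalar problem.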
Rebuttal 1: Rebuttal: We thank the Reviewer for the helpful and insightful feedback. Below we provide point-to-point responses to the weaknesses and questions. **W1. Related works on robust average cost MDPs.** We thank the reviewer for pointing this out. In the revision, we modify our statement as 'We characterized the first policy-based global convergence bounds with general uncertainty sets for robust average cost MDPs.' We also compare our work with the related works [Borkar 2010], [BS02], [MMS24], [MMS23]. We add the following paragraph in Section 1.2 in the revision: **Exponential cost robust MDPs.** For robust average cost MDPs, when the uncertainty set is defined by the KL-divergence metric, the problem admits a dual formulation, which is the exponential cost robust MDP. Exponential cost robust MDPs have also been studied in the literature. In [Borkar 2010], the Q-learning and actor-critic methods are described and the asymptotic performance is characterized for risk-sensitive robust MDPs. In [BS02], the value iteration and policy iteration algorithms are also analyzed for risk-sensitive MDPs. Recently, in [MMS23], modified policy iteration was proved to converge to the global optimum for exponential cost risk-sensitive MDPs. The policy gradient algorithm for risk-sensitive exponential cost MDPs is studied and asymptotic convergence bounds to a stationary point are provided in [MMS24]. In our paper, we study robust average reward MDPs with general uncertainty sets and characterize the global convergence of our algorithm. **W2. Evaluate the robust relative value function.** The robust relative value function can be evaluated by generalizing the relative value iteration to the robust setting. 
The algorithm is presented as follows:

> **Algorithm parameters:** $V_0$, $\epsilon$, and reference state $s^* \in \mathcal{S}$
> Initialize $w_0 \leftarrow V_0 - V_0(s^*)\textbf{1}$
> Loop while $sp(w_t - w_{t-1}) \geq \epsilon$:
> $\quad$ For each $s \in \mathcal{S}$:
> $\qquad V_{t+1}(s) \leftarrow \mathbb{E}_{a\sim\pi}[r(s, a) + \sigma_{\mathcal{P}_s^a}(w_t)]$
> $\qquad w_{t+1}(s) \leftarrow V_{t+1}(s) - V_{t+1}(s^*)$
> Return $V_t$

In the algorithm, $\sigma_{\mathcal{P}_s^a}(w_t) = \max_{p\in\mathcal{P}_s^a} p^\top w_t$, which can be solved in the dual formulation for various uncertainty sets [Iyengar05]. The returned $V_t$ is the robust relative value function and $V_t(s^*)$ is the robust average reward. The assumption that an oracle outputting the robust value function exists is also made in robust discounted reward MDPs, for example [Li22]. **W3. Unichain assumption may lead to infinite $C_{PL}$.** We agree with the reviewer that the unichain assumption may not guarantee that $C_{PL}$ is finite; moreover, ergodicity is a sufficient condition to guarantee that $C_{PL}$ is finite. **W4. Determine the increasing step size $M$.** Since the definition of $M$ involves taking the sup over all transition kernels in the uncertainty set, theoretically determining $M$ is challenging. In practice, we only require an upper bound on $M$ so that the convergence result in Theorem 4.6 holds. Therefore, in practice, we can run simulations to find an upper bound on $M$ and fine-tune it. For example, when ergodicity holds we can numerically solve $\min_{\pi, P\in\mathcal{P}}\|d_P^\pi\|_\infty$, which is an upper bound on $M$. **Q1.
Closed-form expression for the value function.** For widely used metrics such as the total variation distance, the $\chi^2$ distance, and the KL divergence, a closed-form expression exists for $\max_{\mathsf{P}\in\mathcal{P}}\sum_{s' \in \mathcal{S}}\mathsf{P}_{s, s'}^a V_{\mathcal{P}}^\pi(s')$ [Iyengar05]. The dual formulation depends on the structure of the uncertainty set. However, whether the value function has a closed-form solution, as in the exponential cost MDPs, is unclear. Moreover, our algorithm and theoretical results do not depend on the specific structure of the uncertainty set or on a closed-form solution of the value function. **Q2. Policy gradient or mirror descent.** We do not refer to our algorithm as policy gradient because, in our algorithm, we replace the policy (sub)gradient $\nabla g_\mathcal{P}^{\pi_k}$ with $Q_\mathcal{P}^{\pi_k}$. Since our algorithm is mirror descent with a specific Bregman divergence and $Q_\mathcal{P}^{\pi_k}$ is not the (sub)gradient, we refer to it as policy mirror descent. **Q3. Average cost MDPs.** We thank the reviewer for this comment. In our paper, we adopt a minimization formulation to align with conventions in the optimization literature. In the revision, we will replace rewards with costs. **Q4. Typos.** We thank the reviewer for pointing this out. Fixed. **Reference.** [Borkar 2010] Borkar, V. S. (2010, July). Learning algorithms for risk-sensitive control. In Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems–MTNS (Vol. 5, No. 9). [BS02] Borkar, V. S., Meyn, S. P. (2002). Risk-sensitive optimal control for Markov decision processes with monotone cost. Mathematics of Operations Research, 27(1), 192-209. [MMS24] Moharrami, M., Murthy, Y., Roy, A., Srikant, R. (2024). A policy gradient algorithm for the risk-sensitive exponential cost MDP. Mathematics of Operations Research. [MMS23] Murthy, Y., Moharrami, M., Srikant, R.
(2023, June). Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs. In Learning for Dynamics and Control Conference (pp. 395-406). PMLR. [Iyengar05] Garud N. Iyengar (2005). Robust Dynamic Programming. Mathematics of Operations Research. [Li22] Yan Li, Guanghui Lan, Tuo Zhao. (2022). First-order Policy Optimization for Robust Markov Decision Process. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my comments! I am satisfied with the responses and am increasing my score. I am still not convinced that the algorithm should be referred to as policy mirror descent, since the Euclidean norm as Bregman divergence yields policy gradient. I understand the need to use the sub-gradient (due to the maximization objective), but the divergence considered is the Euclidean norm, and hence policy gradient (or sub-gradient) is perhaps more accurate than mirror descent. Once again, thank you for your response!
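As a concrete companion to the robust relative value iteration described in W2 above, here is a minimal numerical sketch (our own illustration, not the authors' code). It fixes a policy and runs the iteration, instantiating the support function $\sigma_{\mathcal{P}_s^a}$ for two common uncertainty sets: a total-variation ball (greedy closed form) and a KL ball (via the standard one-dimensional convex dual underlying the exponential cost connection in W1). All function names, radii, and optimizer choices are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sigma_tv(p0, w, delta):
    # max p @ w over the simplex with ||p - p0||_TV <= delta: greedily move
    # up to `delta` probability mass from the smallest-w states to the argmax.
    p = p0.copy()
    hi = int(np.argmax(w))
    budget = min(delta, 1.0 - p[hi])
    for i in np.argsort(w):
        if i == hi or budget <= 0:
            continue
        take = min(p[i], budget)
        p[i] -= take
        p[hi] += take
        budget -= take
    return p @ w

def sigma_kl(p0, w, delta):
    # sup { p @ w : KL(p || p0) <= delta } via the convex dual
    #   inf_{lam > 0} lam * log E_{p0}[exp(w / lam)] + lam * delta.
    def dual(lam):
        z = w / lam
        m = z.max()  # log-sum-exp shift for numerical stability
        return lam * (m + np.log(p0 @ np.exp(z - m))) + lam * delta
    return minimize_scalar(dual, bounds=(1e-6, 1e6), method='bounded').fun

def robust_rvi(P, r, pi, sigma=sigma_tv, delta=0.05,
               eps=1e-8, s_star=0, max_iter=10_000):
    # Robust relative value iteration for a fixed policy pi, following the
    # algorithm quoted in W2. P[s, a] is the nominal next-state distribution,
    # r[s, a] the reward. Returns (relative values w, V_t(s*) = robust gain).
    S, A, _ = P.shape
    w = np.zeros(S)
    for _ in range(max_iter):
        V = np.array([sum(pi[s, a] * (r[s, a] + sigma(P[s, a], w, delta))
                          for a in range(A)) for s in range(S)])
        w_new = V - V[s_star]
        diff = w_new - w
        if diff.max() - diff.min() < eps:  # span seminorm sp(w_t - w_{t-1})
            return w_new, V[s_star]
        w = w_new
    return w, V[s_star]
```

With `delta = 0` this reduces to standard relative value iteration, so `V[s_star]` can be checked against the average reward computed from the stationary distribution of the nominal chain — a simple sanity test for the oracle discussed in W2.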
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning Discrete Latent Variable Structures with Tensor Rank Conditions
Accept (poster)
Summary: This study develops new methods for identifying causal structures in discrete latent variable models using rank constraints. By leveraging this nontrivial algebraic property, the authors propose criteria and algorithms for discovering hidden causal relationships. They validate their approach through simulations. Strengths: 1. As far as I know, the tensor rank condition is a novel contribution. Unlike previous work on rank deficiency, this method can handle nonlinear relations. 2. The experiments are done on both simulated and real-world datasets. Weaknesses: I wonder if some assumptions are a bit strong, such as the faithfulness assumption and the three pure child assumption, and I wonder how they play into practical scenarios. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How does this paper compare to works like [1] where rank constraint is also used in a non-linear setting? [1] Kong, Lingjing, et al. "Learning Discrete Concepts in Latent Hierarchical Models." *arXiv preprint arXiv:2406.00519* (2024). Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
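An aside for the reader: the rank-based idea summarized in this review can be made concrete with a small numerical sketch (our own toy construction, not the paper's code). If two discrete observed variables are d-separated by a latent variable $L$ with $k$ states, their joint contingency table is a sum of $k$ rank-one tables, so its matrix rank is at most (and generically exactly) $k$ — the rank reveals the latent cardinality rather than the table size.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_dist(*shape):
    # Random (conditional) distributions, normalized over the last axis.
    p = rng.random(shape)
    return p / p.sum(axis=-1, keepdims=True)

k, m = 3, 6                 # |supp(L)| = 3; X1, X2 each take 6 values
pL = rand_dist(k)           # P(L)
pX1 = rand_dist(k, m)       # P(X1 | L)
pX2 = rand_dist(k, m)       # P(X2 | L)

# X1 and X2 are conditionally independent given L, so
# P(x1, x2) = sum_l P(l) P(x1|l) P(x2|l): a sum of k rank-one tables.
joint = np.einsum('l,la,lb->ab', pL, pX1, pX2)

print(np.linalg.matrix_rank(joint))   # recovers k = 3, not the table size 6
```

This is the two-way (matrix) special case; per the review's summary, the paper's tensor rank condition plays the analogous role for multi-way probability tensors, where rank constraints reflect d-separation by latent variables.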
Rebuttal 1: Rebuttal: We appreciate your careful review and suggestions and would like to thank you for your positive assessment of our work. >Q1: I wonder if some assumptions are a bit strong such as the faithfulness assumption and the three pure child assumption and I wonder how they play into practical scenarios. **A1**: We would like to mention that, to ensure identifiability of the latent variable structure, we introduce the assumptions of three pure children and faithfulness. Regarding the three-pure child assumption, it is a commonly used condition, well-studied in both linear and discrete models, as demonstrated by Silva et al., 2006, Kummerfeld and Ramsey, 2016, and Chen et al., 2024 (more related works can be found in the introduction of our manuscript). It is worth noting that the pure child assumption is often valid when data is collected through questionnaires, which is a common practice in fields such as social science, psychology, and medical science. For a more detailed discussion of the three-pure children assumption, please refer to the general response. Regarding the faithfulness assumption, it plays a significant role in constraint-based methods, ensuring that the learned structure accurately represents a causal graph. It is considered a standard assumption, which is known to hold in simple systems such as linear Gaussian models [Meek (1995)]. This concept has been extensively discussed in the literature [Spirtes et al., 2000; Spirtes and Glymour, 1991]. We appreciate your comments and will discuss these assumptions further in the revision. >Q2: How does this paper compare to works like [1] where rank constraint is also used in a non-linear setting? **A2**: We would like to clarify that there are two main differences compared to [1]. First, the tensor rank condition in our work can capture more d-separation information among observed variable sets.
For example, the graphical implications of tensor rank allow us to infer the d-separation among any pair of variables within the variable set $\mathbf{X}_p$, whereas the rank of the probability table only infers the d-separation between two variables (or vectors). In particular, when only two-way tensors are considered, [1] can be seen as a specific case of our approach. Second, in [1], the observed variables are continuous, making the recovery of the distribution of discrete variables untestable. Additionally, the model in [1] does not allow for 'triangle structures' among the latent variables. In contrast, our work does not require the recovery of the distribution of discrete variables and allows for any dependence among latent variables. Furthermore, [1] assumes that the cardinality of latent support is the same, whereas our work allows for different cardinalities of latent support (see Appendix F). Reference: [1] Kong, Lingjing, et al. "Learning Discrete Concepts in Latent Hierarchical Models." arXiv preprint arXiv:2406.00519 (2024). --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. I will keep my score leaning towards acceptance. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: Dear Reviewer Upr4, Thank you for your positive comment. We believe our paper is improved as a direct result of your comments. Sincerely, Authors
Summary: This paper aims to learn latent causal models with discrete random variables. To this end, a tensor rank condition on contingency tables of observed variables is used. Specifically, the paper establishes connections between the d-separation of the observed variables and the “tensor rank” of the said random variables. Subsequently, this property is leveraged to construct an algorithm that learns (i) the measurement model, that is the connections from the latent variables to observed variables, and (ii) the structure model (the edges between the latent variables) up to Markov equivalence class. The main structural assumptions for the results are purity (no edges between the observed variables) and three observed children per latent node, and other standard causal assumptions, e.g., faithfulness, are adopted in the paper. Strengths: The literature mostly focuses on learning linear latent models. Instead, this paper aims to address discrete latent models for which the existing work is scarce (the paper outlines that they are limited to very restrictive structures, e.g., trees). The whole paper builds on a nice observation: the tensor rank of the contingency tables of the joint dist. of the observed variables is connected to the support of the variables that d-separate them. This observation is substantiated in the main result Theorem 3.3. All the subsequent results (mainly the algorithms) build on this main observation, which leads to rather simple proofs. I find this simplicity a strength. The practical implementation of the proposed method is also clearly explained using existing techniques to test the tensor rank. Weaknesses: - *Structural limitations are quite restrictive*. Specifically, the purity assumption that there are no edges between the observed variables is a serious limitation. On one hand, this is understandable as the existing works usually have similar or stronger restrictions.
On the other hand, having partial results for the cases where this assumption is violated would have made the paper significantly stronger. I see that the paper touches on this via an example and leaves it as future work; nevertheless, it’s a big limitation of the current paper. - On a related note, the assumptions are not convincingly justified. For instance, the three-pure children assumption is just taken for granted without proper discussion (I acknowledge the reference to similar work, please see the questions section). - This is a relatively minor concern: I enjoyed the flow of the paper, but the writing could be improved for sure. I elaborate on this in questions. Technical Quality: 3 Clarity: 2 Questions for Authors: **Critical points on assumptions** - *Allowing edges between observed variables*: It seems like extending the results to impure structures would require a different approach, because the propositions -- the backbone of finding the measurement model -- are very specific to the purity assumption. In the discussion section, an example is provided and the extension is left for future work. I think you can be more open about it, e.g., what are the challenges, what are the missing pieces? For instance, in the example in Section 7, are you also able to determine the lack of edges between any $(X_i,X_j)$ pair except $(X_4,X_5)$? - *Structural constraints in the literature*: I think you need to be more direct and provide exact requirements/structural constraints and results. For instance, if I recall correctly, Cai et al. and Xie et al. require 2 pure children per latent whereas this paper requires 3 pure children (I am aware that those papers are for linear latent models). Can you elaborate on the similarities and differences between the roles of these assumptions? One key difference from related work: L89 says that there are no constraints on the latent structure. Is this difference w.r.t.
binary latent variable case (Gu and Dunson (2023) and Gu (2022)) or all the previous work including the linear latent variable models? - In Appendix D, the full rank assumption (Assumption 2.4) discussion is informative and reasonable. However, a similar discussion for the other assumptions is missing. For instance, in the example in Section 7, the possibility of working on an impure structure is explained but failure cases (and why they happen) are not explained. I think it’s equally important to explain those cases. On a related note, the role of 3 pure children assumption is not discussed in the paper (please point out if I missed it). - *Support of latent variables*: The paper says that *”For simplicity, we focus on the case where all latent variables have the same number of categories, with extensions provided in Appendix F”*. Some clarification can be helpful. I skimmed through Appendix F, and it seems like the extension to “latent variables with different numbers of categories” is immediate without extra assumptions. If this is indeed the case, then I suggest the authors explicitly state it in the main paper. If the extensions require more assumptions and/or more sophisticated techniques, again, be more clear about it so the reader can understand if it’s a limitation or not. **Experiments related**: - I see that the definitions of metrics are in Appendix I (which should be referred to in L288 to help the reader). It would have been nicer to squeeze them into the main paper. - Adding a small section for real data applications (Section 6) without sufficient details does not add much value. I think it can be safely moved to the appendix, where the details are given, which would also open space for a more self-contained simulation section. - I think your method should be able to handle an arbitrary number of children as long as there are at least 3. So, you could have tested different sizes for different latent variables instead of a fixed number of 3 or 4. 
For instance, each latent variable can have at least 3, at most 6 observed variables, which could have allowed you to evaluate whether there is a difference between the performance in learning measurement and structure models. - For such sparse and small graphs, 50k samples are a bit too aggressive. Perhaps starting from 1k would show the limits of the approach and help us to understand the performance better. For instance, at “mismeasurements” metrics, even 5k samples seem to work well; it would have been interesting to see what is the limit. **Relatively minor notes**: - *Definition 2.1 (Discrete LSM)*: Is this a proper name? I don’t like calling the specified model “discrete LSM” simply because it is too general. Assumptions are clearly stated in the definition, but calling the model just discrete LSM, especially under the purity and three-pure child assumptions, is not great, especially when we think about the potential future work and positioning of this paper in the literature. - *Theorem 4.7*: Please be careful when using $p$ in notations, $\mathbf{X}_p$ and $\mathbf{L}_p$ notations are a bit confusing. At first reading, $p$ reads as a set, e.g., in Example 4.8, $\mathbf{X}_p$ implies $p=\{1,10,4,5,7,8\}$, but $\mathbf{L}_p=\{L_2,L_3\}$, so it is not a set notation. - *Conclusion Section 8*: Overall, I liked this conclusion. That being said, being more direct and recalling the main structural assumptions would have been good. For instance, the limitations paragraph in Appendix K is very clean — I appreciate it. Since it’s just 2.5 lines, I’d suggest including it in the main paper. **Typos etc.**: - Prop 4.3: I think Rule 2 statement is missing “if”. Also, why $L_i$ instead of $L$? - Theorem 4.7: varaible - Theorem 4.9: TESNOR - L177: missing period - L800: TENSKR - CP decomposition: please spell out at the first appearance - Figure 8: make the markers larger, please.
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations are mostly addressed. I suggest the authors to see my comments on questions to further clarify them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful and valuable questions and for spending the time and effort on this review. We will respond to these issues point by point. >Q1: Allowing edges between observed variables: ... I think you can be more open about it, e.g., what are the challenges, what are the missing pieces? **A1**: One of the challenges in impure structures is accurately identifying the number of latent variables. As shown in Figure 1 in our attached PDF, by the graphical implication of tensor rank, the rank constraints over the probability tensor can coincide across different structures, hindering the identification of latent variables. Although we can detect the absence of edges between pairs of variables (as in the structure described in Sec. 7), a careful search strategy is still required, along with proper assumptions on the cardinality of latent support, to distinguish these equivalence classes. We will address this issue in our future work. >Q2: Structural constraints in the literature: ... Can you elaborate on the similarities and differences between the roles of these assumptions? **A2**: In the linear non-Gaussian model, Cai et al. and Xie et al. show that the identification of the measurement model only relies on the two-pure children assumption, due to the availability of higher-order statistics from non-Gaussianity. However, in the linear Gaussian model, the three-pure children assumption is necessary, due to the equivalence classes that Tetrad constraints cannot distinguish under the two-pure condition (as noted by Silva et al. and Kummerfeld et al.). Interestingly, when only four-way tensors are considered in discrete models, the tensor rank condition serves as a 'nonlinear' version of the Tetrad constraints, resulting in similar structural limitations.
Moreover, for discrete latent variable models, Gu and Dunson (2023) and Gu (2022) address only partial structures and their identifiability. For instance, they focus on either measurement models (under the same three-pure children assumption) or pyramid structures, without providing a comprehensive approach for all possible structures. We will include this discussion in the 'Related Works' section. >Q3: In Appendix D, ... a similar discussion for the other assumptions is missing. For instance, in the example in Section 7, the possibility of working on an impure structure is explained but failure cases (and why they happen) are not explained. **A3**: The three-pure children assumption is a sufficient condition that ensures the identifiability of latent variables using the tensor rank condition over a four-way tensor (see more details in the general response). A similar reasoning applies in linear Gaussian latent variable models (e.g., Silva et al. and Kummerfeld et al.). In the presence of an impure structure, the latent variable may not be correctly identified (see **A1**). Additionally, impure structures can arise when the collected variables have direct causal relationships. For example, in a questionnaire survey, the observed variables 'occupations' and 'income' may exhibit an impure structure because the attributes of different occupations can directly impact income. >Q4: Support of latent variables: … all latent variables have the same number of categories, with extensions provided in Appendix F”. ... If this is indeed the case, then I suggest the authors explicitly state it in the main paper. **A4**: Thank you for your careful observation. We will add the following clarification to Section 4 to avoid confusion. "For simplicity, we focus on the case where all latent variables have the same number of categories.
The result can be directly extended to cases with different numbers of categories by sequentially testing the cardinality of the latent support (see details in Appendix F)." >Q5&6: I see that the definitions of metrics are in Appendix I. It would have been nicer to squeeze them into the main paper. **A5&6**: Thank you for your suggestion. We will move the real-world application section into the appendix and add the details of evaluation metrics into the simulation experiment section. >Q7&8: I think your method should be able to handle an arbitrary number of children as long as there are at least 3. … Perhaps starting from 1k would show the limits of the approach and help us to understand the performance better. **A7&8**: Due to limited space, please see our response in the 'Supplementary Experiments' part of the general response section. >Q9: Definition 2.1 (Discrete LSM): Is this a proper name? **A9**: Thank you for your constructive suggestion. We will revise "discrete LSM" to "three-pure discrete LSM" (abbreviated as *discrete 3PLSM*) in the revision. >Q10: Theorem 4.7: Please be careful when using $p$ in notations. **A10**: We have revised them to $L_q$ and $X_p$ to avoid confusion. >Q11: Conclusion Section 8: Overall, I liked this conclusion. That being said, being more direct and recalling the main structural assumptions would have been good. **A11**: We have added a recall of the three-pure children assumption and revised the last paragraph in the conclusion as follows: "However, the proposed method can hardly be applied to high-dimensional discrete data and only applies to pure-children structures. Therefore, relaxing these restrictions and making them scalable to high-dimensional real-world datasets and more general structural constraints, such as impure structures and hierarchical structures, would be a meaningful future direction." >Q12: Typos etc. **A12**: Thank you very much for your careful review. We have corrected these issues in the revision.
We sincerely thank the reviewer for their careful review and thoughtful suggestions, which have been very instructive. We will incorporate these discussions in the revision to improve our manuscript. If any explanations remain unclear, we welcome further discussion. --- Rebuttal Comment 1.1: Comment: I thank the authors for the well-written rebuttal; it addressed my questions and confusions clearly. I increased my score accordingly to “weak accept”. This rebuttal deserves an explanation for why I am not giving a higher score: It's mainly because I think the considered setting is still somewhat limited — even if the assumptions are justified within the context of the paper’s results. --- Reply to Comment 1.1.1: Title: Thanks for your kind response Comment: Dear Reviewer Snze, We are glad that most of your concerns have been addressed, and we are really grateful that you raised your score to “weak accept”. Sincerely, Authors
Summary: This paper studies the problem of learning causal structures among latent variables from discrete observational data. The author presents a tool, termed the tensor rank condition, to establish the connection between rank constraints of the probability tensor and d-separation relations in the causal graph. The proposed tool appears simple and effective. Based on the tensor rank condition, the author proposes a two-stage algorithm that first learns the causal clusters to identify latent variables and then tests the conditional relations among latent variables using their measured variables. The proposed algorithm extends the identification bound of discrete latent variable models, and the experimental results demonstrate the efficiency of the proposed methods. Strengths: 1. The paper is clearly written and well-organized. 2. The proposed tensor rank condition establishes the connection between algebraic constraints and d-separation relations in discrete causal models. This tool has the potential to explore more causal discovery problems in discrete causal models 3. Compared to traditional methods of the linear latent variable model, such as rank constraints of the covariance matrix, the tensor rank condition takes a meaningful step in latent variable modeling in more general cases. 4. The proposed algorithm looks simple but effective, addressing the identification of discrete latent variable models. Weaknesses: 1. For the sufficient observation assumption, it seems that the cardinality of the observed variable support can be equal to the cardinality of the latent support, as discussed in Remark F.4. Could you clarify this? 2. Why is the Three-Pure Child Variable Assumption required? In my opinion, the tensor rank condition can test the d-separation relations between only two observed variables, as shown in Figure 1. Thus, one can learn the causal cluster from a two-pure child variable assumption. Please correct me if I am wrong. 
If this assumption does not hold, what happens to the output of your algorithm? 3. In Proposition F.3, $r$ is easily confused with the concept of the cardinality of the support of a single latent variable. It is recommended to change the expression to something like $\tilde{r}$. Technical Quality: 3 Clarity: 3 Questions for Authors: typo: In Figure 2(b), LVM should be LSM. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable comments and suggestions and thank you for your positive assessment of our work. >Q1: For the sufficient observation assumption, it seems that the cardinality of the observed variable support can be equal to the cardinality of the latent support, as discussed in Remark F.4. Could you clarify this? **A1**: Thank you for your suggestion. You are correct. We only require the cardinality of the observed variables to equal that of the latent variables. We will revise this in the revision. >Q2: Why is the Three-Pure Child Variable Assumption required? In my opinion, the tensor rank condition can test the d-separation relations between only two observed variables, as shown in Figure 1. Thus, one can learn the causal cluster from a two-pure child variable assumption. Please correct me if I am wrong. If this assumption does not hold, what happens to the output of your algorithm? **A2**: Thanks for raising these critical questions. If there are only two pure-measured children for each latent variable, it can result in an indistinguishable structure. For instance, as shown in the proof of Proposition 4.3, the tensor rank condition for a three-variable probability tensor leads to equivalence classes (Fig. 4 (a) $\sim$ (c)). However, even with the four-way tensor, the cluster cannot be identified due to the structure illustrated in Fig. 5. Thus, the three-pure measured variable assumption is a sufficient condition to ensure the identification of the measurement model. If this assumption is not met, our algorithm may output a causal structure where the number of latent variables is smaller than the true number. However, to test the conditional independence (CI) relations among the latent variables, only two pure children for each latent variable are necessary (see Theorem 4.7).
Thus, given the causal cluster with only two pure children for each latent variable, the CI relations can be tested, and the structure model remains identifiable. More discussion can be found in the general response. > Q3: In Proposition F.3, $r$ is easily confused with the concept of the cardinality of the support of a single latent variable. It is recommended to change the expression to something like $\tilde{r}$. **A3**: Thank you for your suggestion. We have revised it in the revision. > Q4: typo: In Figure 2(b), LVM should be LSM. **A4**: Thank you for your suggestion. We have corrected it in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. My questions have been well addressed. As far as I know, the tensor rank condition is the first to establish the connection between algebraic constraints and d-separation in discrete causal models. Based on this, the identification algorithm for discrete LSM is both simple and effective. I believe these results are meaningful to the NeurIPS community. I raise my score accordingly. --- Reply to Comment 1.1.1: Title: Thank you very much for your positive comment Comment: Dear Reviewer hs7U, Thank you very much for your positive comment! We believe our paper is improved as a direct result of your comments. Sincerely, Authors
Summary: This paper studies the problem of learning latent variable structure in the discrete LSM measurement model. My understanding is that the paper operates under the following assumptions: 1) All latent variables are discrete 2) There are no connections between observed variables (i.e. observed variables are independent when conditioned on latent variables 3) In the unknown causal structure graph, each observed variable has exactly one latent variable parent. 4) The joint probability distribution of observed variables is fully known, and there is no noise in the measurement. 5) Each latent variable has at least three pure variables as children. 6) Other technical assumptions. The authors show that under such assumptions the unknown causal structure can be recovered from an algorithm that studies tensor rank conditions on contingency tables for an observed variable set. Strengths: The problem studied in this paper covers an important case in causal structure learning literature. The measurement model with discrete latent variables is an important special case of causal structure models observed in practice, hence identifiability results and structure learning algorithms are important for this problem. The paper is well-written, and the claims are sound. Theoretical results are supported by experiments. Weaknesses: I am a bit concerned about the novelty of the proposed results and methodology. In particular, [1] seems to obtain a stronger result under similar, if not weaker, assumptions, and this paper does not cite [1]. [1] proposes a structure learning algorithm for measurement model with discrete latent variables, under what seems to be a weaker set of assumptions. In particular [1] does not require each latent variable to have at least three pure children and does not require that each observed variable has only one latent parent. Those assumptions seem quite strong. 
The technique used in [1] to identify causal structure is also similar in flavor, and also relies on studying the rank of joint probability tensor (through numbers k(S)). Considering that [1] operates under a significantly more general setup while relying on similar techniques and ideas, could the authors please clarify what is the main novelty of this paper compared to [1]? [1] Learning latent causal graphs via mixture oracles B Kivva, G Rajendran, P Ravikumar, B Aragam - Advances in Neural Information Processing Systems, 2021 Technical Quality: 3 Clarity: 3 Questions for Authors: see above ----- During rebuttal score was increased 3 --> 4 Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful and valuable comments. We will respond to these issues point by point. > Q1: [1] proposes a structure learning algorithm for measurement model with discrete latent variables, under what seems to be a weaker set of assumptions. In particular [1] does not require each latent variable to have at least three pure children and does not require that each observed variable has only one latent parent. Those assumptions seem quite strong. In particular, [1] seems to obtain a stronger result under similar, if not weaker, assumptions, and this paper does not cite [1]. **A1**: Thank you for pointing us to this interesting work. We would like to discuss its connection with our work. The main difference is that the identifiability of [1] requires assuming that the "Mixture Oracle" is known (i.e., the mixture model over $\mathbf{X}$ is identifiable), without discussing how to obtain it, especially for discrete data, while we do not require such an assumption. We would like to argue that the requirement of the Mixture Oracle may lead to a weaker identifiability result than ours due to the strict conditions in obtaining the Mixture Oracle. For instance, [2] indicates that a sufficient and necessary condition for identifying the parameters of a discrete mixture model is the presence of $2K-1$ strongly separate variables, where $K$ is the number of mixture components. This structural condition is more stringent than our requirements. The reason is that identifying the mixture model requires learning the latent distribution $P(\mathbf{L})$, while our method does not rely on this. Furthermore, as discussed in Sec. 6 in [1], learning mixture models is a nontrivial problem. The authors in [1] use approximate methods, such as the K-means algorithm, to estimate the number of mixture components $k(S)$ for all $S \subset X$, which can affect the accuracy of the estimates in practice. 
Meanwhile, the K-means algorithm cannot be directly applied to discrete data; we thus do not include it in our baselines. >Q2: The technique used in [1] to identify causal structure is also similar in flavor, and also relies on studying the rank of joint probability tensor (through numbers k(S)). Considering that [1] operates under a significantly more general setup while relying on similar techniques and ideas, could the authors please clarify what is the main novelty of this paper compared to [1]? **A2**: We would like to clarify the following points: **Identification based on Tensor Decomposition.** Roughly speaking, [1] depends on unique tensor decomposition (essentially a parameter estimation technique) and requires partial information from a Mixture Oracle. However, it is difficult to obtain the Mixture Oracle and unbiased parameter estimates. In contrast, our theoretical approach focuses on tensor rank constraints across different sets of variables, without directly relying on unique tensor decomposition or any prior information about the Mixture Oracle. Specifically, in [1], the identification of the measurement model relies on unique tensor decomposition, such as the generalized Kruskal's theorem. Constructing such a tensor requires partial information from a Mixture Oracle, such as $k(S)$, where $k(S) = dim(Pa(S))$ according to observation 2.7 in [1]. It is important to note that $k(S)$ for any subset $S \subset X$ cannot be estimated from the tensor rank. As demonstrated by the graphical implications of tensor rank, the tensor rank is determined by the cardinality of the conditioning set. That is, $Rank(P(S)) \neq dim(Pa(S)) = k(S)$. For example, consider a structure $L_1 \to L_2$, where $X_1$ is a child of $L_1$ and $\{X_2, X_3\}$ is the children set of $L_2$. In this case, the tensor rank of the joint probability tensor over $S = \{X_1, X_2, X_3\}$ is $|supp(L_2)|$, whereas the mixture oracle in [1] requires that $k(S) = |supp(L_1, L_2)|$. 
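This example can be checked numerically. The sketch below uses hypothetical, randomly drawn CPDs (the variable supports, seed, and distributions are illustrative choices, not from the paper) and verifies that every matricization of the joint tensor $P(X_1, X_2, X_3)$ has rank $|supp(L_2)| = 2$ rather than $|supp(L_1, L_2)| = 4$:

```python
import numpy as np

rng = np.random.default_rng(0)
d_obs, d_lat = 4, 2  # hypothetical: 4-state observed variables, binary latents

# Randomly drawn CPDs for the structure L1 -> L2, with X1 a child of L1
# and {X2, X3} children of L2
pL1 = rng.dirichlet(np.ones(d_lat))                   # P(L1)
pL2gL1 = rng.dirichlet(np.ones(d_lat), size=d_lat)    # P(L2 | L1), row = value of L1
pX1gL1 = rng.dirichlet(np.ones(d_obs), size=d_lat).T  # P(X1 | L1), column = value of L1
pX2gL2 = rng.dirichlet(np.ones(d_obs), size=d_lat).T  # P(X2 | L2)
pX3gL2 = rng.dirichlet(np.ones(d_obs), size=d_lat).T  # P(X3 | L2)

# Joint probability tensor P(X1, X2, X3), marginalizing out L1 and L2
T = np.einsum('a,ab,ia,jb,kb->ijk', pL1, pL2gL1, pX1gL1, pX2gL2, pX3gL2)

# Each mode unfolding has matrix rank |supp(L2)| = 2 (a lower bound on the
# tensor rank), and the CP form above shows the rank is at most 2, so
# Rank(P(S)) = 2 < |supp(L1, L2)| = 4.
ranks = [int(np.linalg.matrix_rank(np.moveaxis(T, m, 0).reshape(d_obs, -1)))
         for m in range(3)]
print(ranks)  # [2, 2, 2]
```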
In contrast, our theoretical result does not directly rely on the unique decomposition of tensors, but rather examines tensor rank constraints over various variable sets. A key challenge with the decomposition-based method is the difficulty in obtaining unbiased parameter estimates. Our method avoids this issue by focusing on the number of parameters, specifically the cardinality of latent support, making it a more practical solution. **Novelty.** In our paper, we propose a statistically testable method, termed the tensor rank condition, which establishes the connection between tensor rank and d-separation among observed variables. The tensor rank condition is applicable to general discrete causal models and is not restricted to measurement models. This represents one of our main contributions compared to [1]. Based on the tensor rank condition, we provide an identification algorithm that can determine the causal structure of discrete latent structure models with a theoretical guarantee. Unlike [1], our algorithm offers a solution for discrete latent variable models without requiring any prior knowledge about the mixture model, thereby elegantly eliminating the need for a mixture oracle. Moreover, our method offers a theoretical guarantee and is simple and efficient due to the testability of the tensor rank condition. **Acknowledgements.** Thank you for raising the constructive questions, which have inspired us to further explore discrete latent variable models. We will incorporate these discussions in the revised version, specifically in the 'Related Works' section. If you believe your concerns have been addressed, we kindly request that you consider adjusting your initial score accordingly. If any explanations remain unclear, we are open to further discussion. Reference: [1]. Kivva B, Rajendran G, Ravikumar P, et al. Learning latent causal graphs via mixture oracles. [2]. Tahmasebi B, Motahari S A, Maddah-Ali M A. 
On the identifiability of finite mixtures of finite product measures. --- Rebuttal Comment 1.1: Comment: I am grateful to the authors for their detailed and constructive rebuttal. I am still concerned about the novelty of the results of this paper as compared to [1]. Let me try to provide some more details regarding my concern. The Mixture Oracle in [1] is essentially only used to count the number of components $k(S)$ in the "mixture decomposition" in (*) $P(S) = \sum_{\ell\in supp(pa(S))}P(S|pa(S) = \ell)P(pa(S) = \ell)$, or in other words to estimate the size of $supp(Pa(S))$ if assumption 2.4.a from [1] is added, where S is a subset of observed variables. See eq. (2-3) in [1] and Remark 2.6. > We would like to argue that the requirement of the Mixture Oracle may lead to a weaker identifiability result than ours due to the strict conditions in obtaining the Mixture Oracle The Mixture Oracle in the setup of this paper can be easily obtained from Assumption 2.4 (in this paper) and the assumption about three pure children made in this paper. To see this, 1) observe that $P(S|pa(S) = \ell)$ is a rank-1 tensor of the form $\bigotimes_{X_i \in S}P(X_i|pa(X_i) = \ell_j)$ 2) Let $X_i, X_j, X_k$ be pure children of vertex $L_t$, then by assumption 2.4 columns $P(X_i|pa(X_i))$ can be recovered as first modes of minimum rank tensor decomposition of $P(X_i, X_j, X_k)$. 3) Observe that if vectors in each of sets A and B are linearly independent, then all vectors $a\otimes b$ for $a\in A, b\in B$ are linearly independent. Applying this recursively, we get that all vectors $\bigotimes_{X_i \in S}P(X_i|pa(X_i) = \ell_j)$ are linearly independent, and we know the full set of those vectors from 2). Therefore, we can find the unique decomposition (*) and count the number of components with non-zero coefficients (in practice, coefficients greater than some threshold). 
4) note that the decomposition in 3) is not a minimum rank decomposition, as was also pointed out by the authors in the rebuttal, so Theorem 3.3 cannot be used to estimate the number of components. I also suspect that the desired mixture oracle should exist under significantly weaker assumptions than what I outlined above; in particular, using three pure children seems to be overkill, but I have not spent time trying to simplify these assumptions. ---- Based on the above, Theorem 3.3 appears to be the key novel result of this paper. The algorithm in the sections that follow operates under significantly stronger assumptions than [1]. Theorem 3.3 is an exciting result and, indeed, may provide a more robust approach than what [1] offers. That said, I still think that the assumptions used in this paper are somewhat strong compared to other results in the literature, as also pointed out by other reviewers. After the author's rebuttal, I will increase my current score to 4. --- Rebuttal 2: Comment: Thank you very much for your response. Please allow us to take a little time to clarify further. We want to argue that $supp(Pa(S))$ cannot be directly estimated even under Assumption 2.4 and the three-pure children assumption when the latent structure is unknown and the variables in $S$ have different latent parents. We agree that in your example, when we *know* that the three observed variables $X_i$, $X_j$ and $X_k$ are the pure children of $L$, then $supp(Pa(S))$ can be recovered through the tensor decomposition. However, when the variables in $S$ have different latent parents and the causal structure is *unknown*, the minimum rank tensor decomposition does not necessarily correspond to $supp(Pa(S))$, and it will produce multiple equivalence classes, as we discussed in the proof of Prop. 4.3 in our paper, such that $supp(Pa(S))$ cannot be identified. Consider the structure in Fig. 4(c) in our paper; without loss of generality, let $L_1 \to L_2$, $L_1 \to L_3$, $L_1 \to X_i$, $L_2 \to X_j$ and $L_3 \to X_k$. 
Let $S = \{X_i, X_j, X_k\}$. As such, we will have the following decomposition: $$\begin{align}P(S) =& \sum_{i = 1} ^{|supp(L_1)|} \left( \sum_{L_2} P(X_j, L_2|L_1=i) \right) \otimes P(X_i|L_1=i) \otimes \left( \sum_{L_3} P(X_k, L_3|L_1=i) \right) \cdot P(L_1=i)\\\\ =& \sum_{i = 1} ^{|supp(L_1)|} \left( \sum_{L_2} P(X_j, L_2|L_1=i) \right) \otimes P(X_i|Pa(X_i)=i) \otimes \left( \sum_{L_3} P(X_k, L_3|L_1=i) \right) \cdot P(L_1=i) \end{align}$$ where the vectors $\sum_{L_2} P(X_j, L_2|L_1=i)$ and $\sum_{L_3} P(X_k, L_3|L_1=i)$ are also linearly independent. In this case, we can find a unique decomposition (based on Kruskal's theorem), but it does not follow the form of (*), i.e., $P(S) \ne \sum_{\ell \in supp(Pa(S))} P(S|Pa(S) = \ell) P(Pa(S) = \ell)$. Moreover, in our work, the conditional independence between latent variables can be tested by the tensor rank condition (Theorem 4.7), without assuming the existence of a mixture model or recovering the distribution of latent variables. We are very grateful for your patient response. Please feel free to reach out if you have any further questions. Title: Thank you very much for your response --- Rebuttal 3: Comment: I believe the authors misunderstood points 3 and 4 in my comment above. I do not claim that (*) is obtained via a minimal rank decomposition, as the authors aim to refute in their response. Points 3 and 4 above use the fact that any vector has a unique decomposition in a given basis of a linear space. To give a simplified example: Let $a_1, a_2 \in R^n$ be linearly independent and $b_1, b_2 \in R^m$ be linearly independent; then the vectors $a_1\otimes b_1, a_2\otimes b_1, a_1\otimes b_2, a_2\otimes b_2$ are all linearly independent (as vectors in $R^{nm}$), and for a vector $T = \alpha_1 a_1\otimes b_1 +\alpha_2 a_2\otimes b_2 +\alpha_3 a_1\otimes b_2$ one can uniquely recover this decomposition if the $a_i$ and $b_i$ are known, even if the tensor has smaller rank (say, 2). 
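This basis-recovery argument can be checked numerically; in the sketch below the vectors and coefficients are arbitrary hypothetical choices (random vectors are linearly independent with probability one):

```python
import numpy as np

rng = np.random.default_rng(1)
a1, a2 = rng.random(3), rng.random(3)  # linearly independent in R^3
b1, b2 = rng.random(2), rng.random(2)  # linearly independent in R^2

# T has a 2-term decomposition in the Kronecker-product basis
T = 0.7 * np.kron(a1, b1) + 0.3 * np.kron(a2, b2)

# All four products a_i (x) b_j are linearly independent, so the
# coefficients of T in this basis are unique; since the a_i and b_j
# are known, least squares recovers them exactly.
basis = np.column_stack([np.kron(a, b) for a in (a1, a2) for b in (b1, b2)])
coef, *_ = np.linalg.lstsq(basis, T, rcond=None)
print(np.round(coef, 8))  # only a1(x)b1 and a2(x)b2 get non-zero coefficients (0.7 and 0.3)
```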
One can recover vectors $P(X_i|pa(X_i))$ by looking at minimum rank decompositions of triples of variables and use them to recover the necessary decompositions of $P(S|Pa(S))$ (and hence get the mixture oracle), as I outlined above. Please feel free to reach out if something is still not clear. --- Rebuttal Comment 3.1: Title: Thanks for the clarification in helping us align on these issues Comment: Thanks for the clarification in helping us align on these issues. It seems that you treat the $P(X_i|Pa(X_i))$ as the basis, so that the tensor $P(S|Pa(S))$ can be recovered. However, we want to argue that without knowing the structure information we will have different bases when we probe different triples, and we do not know which one is the "correct" basis (here $P(X_i|Pa(X_i))$) that we are looking for. For example, when there exists a minimum rank decomposition for $X_i, X_j, X_k$, the basis can be equal to $P(X_j|L_1) \neq P(X_j|Pa(X_j))$ (see the example in the previous response). In this case, we do not know if $P(X_j|L_1)$ is $P(X_j|Pa(X_j))$. Moreover, we want to show that, without the structure information, $P(X_i|Pa(X_i))$ may not be identifiable due to violation of the unique decomposition condition. For example, consider a structure with three observed variables $X_i, X_j, X_k$, where $L_1, L_2, L_3$ are their latent parents, respectively. To obtain the minimum rank decompositions of $X_i, X_j, X_k$ for estimating $P(X_i|Pa(X_i))$, Kruskal's condition must be satisfied, that is, $Rk(P(X_i|L))+Rk(P(X_j|L))+Rk(P(X_k|L)) \geq 2r+2$, where $r$ is the cardinality of the support of $L = \{Pa(X_i), Pa(X_j), Pa(X_k)\}$. However, when the cardinalities of the latent and observed supports are both $d$, one can see that $Rk(P(X_i|L))+Rk(P(X_j|L))+Rk(P(X_k|L)) = 3d < 2d^{3}+2$. Therefore, $P(X_i|Pa(X_i))$ cannot be recovered due to a violation of the Kruskal condition. 
It also violates the condition of $2K-1$ strongly separate variables in [2], a sufficient and necessary condition for identifying the parameters of a discrete mixture model. We are very grateful for your patient response. Please feel free to reach out if you have any further questions. --- Rebuttal 4: Comment: I am grateful for your response. > However, we want to argue that without knowing the structure information we will have different bases when we probe different triples and we do not know which one is the "correct" one. It is actually easy to identify the "correct" one. Let's fix $X_i$ and look at triples with different $X_j, X_k$. If the minimum rank decomposition has more than $d$ components, we can immediately refute the triple as incorrect, which handles your concern about the uniqueness of the decomposition. By Kruskal's theorem, those triples that admit a decomposition of rank at most $d$ will have a unique decomposition. Decompose all such tensors and compile a list of first-mode components for each of them. Note that $P(X_i|L = \ell) = \sum_{t} P(X_i|L = \ell, Pa(X_i) = t) P(Pa(X_i) = t| L = \ell) = \sum_{t} P(X_i| Pa(X_i) = t) P(Pa(X_i) = t| L = \ell)$ and all $P(Pa(X_i) = t| L = \ell)$ are non-negative. This means that each vector of the form $P(X_i|L = \ell)$ belongs to the convex hull of the "correct" set of vectors $P(X_i| Pa(X_i) = t)$. Since all $P(X_i| Pa(X_i) = t)$ are linearly independent by assumption 2.4, they are the unique "defining directions" of the convex cone formed by all first-mode components of tensor decompositions of $P(X_i, X_j, X_k)$ with a fixed $i$. Note also that there is no ambiguity about the direction (sign) of the vectors in the decomposition, as we expect every vector participating in the decomposition to have positive entries. Note that no structure information is used, and one does not need to assume that all latent variables have the same support, etc. 
> Moreover, we want to show that, without the structure information, $P(X_i|Pa(X_i))$ may not be identifiable due to violation of the unique decomposition condition. By the assumptions of this paper, the triples of variables $X_i, X_j, X_k$ which are of interest to construct the "mixture oracle" have a unique decomposition of rank at most $d$ ($d$ here can be replaced with the size of the support of observed variables, so knowledge of $d$ is not needed). All other triples can be ignored. Please let me know if you have any questions. --- Rebuttal 5: Title: Many thanks for your question Comment: **Dear Reviewer tR91,** Thank you for your effort and your time. We would like to argue that by fixing $X_i$ and looking at triples with different $X_j, X_k$, we may not be able to figure out that the triple is incorrect. For example, consider a simple structure $L_1 \to L_2 \to L_3$, where each latent variable has three pure children, corresponding to $\{X_1, X_2, X_3\}$, $\{X_4, X_5, X_6\}$, and $\{X_7, X_8, X_9\}$, respectively. When we fix $X_1$ and look at any $X_j, X_k \in \overline{X_1}$, one can see that there is still a minimum rank decomposition with $d$ components (where $d$ represents the cardinality of the latent support). This is because any three observed variables can be d-separated by only one latent variable in this structure (see the proof of Prop. 4.3). For instance, let $X_j=X_4$ and $X_k = X_7$, which is the example in the previous response. Thus, we cannot check from the decomposition whether the basis $P(X_j|L)$ is actually $P(X_j|Pa(X_j))$. Please let us take this opportunity to highlight our contributions, in order to situate this work in the rich literature: 1. We introduce a statistically testable tool, termed the *tensor rank condition*, which establishes a connection between rank constraints and d-separation relations within a discrete model. 2. Utilizing the tensor rank condition, we achieve the identification of the measurement model under the three pure children assumption. 3. 
Furthermore, based on the tensor rank condition, we complete the identification of the structural model, requiring only two pure children for each latent variable. In [1], the authors leverage information from a mixture oracle and perform tensor decomposition to identify the measurement model, recovering the distribution to learn the latent variable structure. However, the identification approach in this work relies on the existence of a mixture oracle, and estimating the mixture oracle is a non-trivial task. Most importantly, we want to emphasize that the tensor rank condition is not restricted to any specific mixture model and does not rely on information from such models. The discrete causal structure can be discovered using the tensor rank condition, which allows for identifying edges between observed variables and even permits causal directions from observed to latent variables. We believe that the tensor rank condition opens up new research directions and offers latent-variable researchers promising avenues for developing search algorithms. Moreover, as we discussed in the general response, the three pure children assumption is merely a sufficient condition for identifying the measurement model by tensor rank. We also explored potential extensions to more general conditions, such as impure structures or hierarchical structures, for identifiability. We discussed that the tensor rank condition has the capacity to handle these cases, which we outline as directions for future work. If you have further feedback, we would be glad to read it and hope for the opportunity to respond. We highly appreciate your engagement in the discussion. --- Rebuttal Comment 5.1: Comment: > We would like to argue that by fixing X_i and looking at triples with different X_j, X_k, we may not be able to figure out that the triple is incorrect. I explained above how this can be done by taking the convex hull of vectors in the first mode of the decomposition. 
I think the authors in the subsequent paragraph refute something I did not claim. >However, the identification approach in this work relies on the existence of a mixture oracle, and estimating the mixture oracle is a non-trivial task. Performing the steps I described above provides the desired mixture oracle and gives identifiability results out of the box. > We introduce a statistically testable tool, termed the tensor rank condition, which establishes a connection between rank constraints and d-separation relations within a discrete model. *I agree that this is a novel and interesting contribution of this paper!* It is nice to see how this can be used for the identification of the structure. However, as I outlined above, the identifiability results for the measurement model essentially follow from weaker assumptions from prior work. It is possible that the proposed method is more statistically robust under the stronger assumptions of this paper, but this is not how this paper describes its contribution. --- Reply to Comment 5.1.1: Comment: We are very grateful for your thoughtful comments, which we basically agree with. Below please let us share some thoughts. In [1], the authors show that to learn the measurement model, only the $k(S)$ in the mixture oracle is required. In our previous response, we mentioned that recovering the complete information about the mixture oracle generally requires strict assumptions -- we hope you agree, since it involves identifying more parameters of the mixture oracle. Regarding your point about the identification of the measurement model under a weaker assumption, we acknowledge that it is indeed possible since only $k(S)$ is needed. We appreciate this insight. At the same time, we would like to highlight that our method offers a more statistically robust approach for achieving this under the three pure children assumption, which you also pointed out. 
This assumption often holds in practice, especially when data is gathered through questionnaires, which is common in fields including social science, psychology, and healthcare. That being said, we feel that our work goes beyond just this aspect. As you mentioned, we provide a testable tool that offers a stronger ability to discover causal structures. Additionally, it's nice to note that, given the measurement model, the causal structure among latent variables is identifiable with only two pure children needed for each latent variable. We believe this is another major contribution of our work. We are very grateful for your patient response. Please let us know if you have any further questions--we greatly appreciate the opportunity to discuss with you.
Rebuttal 1: Rebuttal: **General Response** We thank the reviewers for their efforts in reviewing our manuscript and for the insightful comments and suggestions. Please see below for our general response. **Sufficient Condition: Three-Pure Children Assumption.** To ensure the identifiability of latent variables, the pure children assumption is generally required and well-studied in the related literature (e.g., Silva et al., 2006; Kummerfeld and Ramsey, 2016; Chen et al., 2024; Bollen, 2002; Bartholomew et al., 2011), as discussed in the first paragraph of the introduction in our manuscript. Besides, the three-pure-children assumption is needed to ensure that latent variables can be identified by the tensor rank condition. The reason is as follows. If there are only two pure measured children for each latent variable, it can lead to an indistinguishable structure. For example, as shown in the proof of Proposition 4.3, the tensor rank condition for a three-variable probability tensor has equivalence classes (Fig. 4 (a) $\sim$ (c)). However, for the four-way tensor, the cluster cannot be identified due to the structure shown in Fig. 5. Thus, the three-pure-children assumption is a sufficient condition for identifying the measurement model. If this assumption is violated, our algorithm may output a causal structure where the number of latent variables is smaller than the true number. However, to test the conditional independence (CI) relations among the latent variables, only two pure children for each latent variable are required (see Theorem 4.7). That is, given the causal clusters with only two pure children for each latent variable, the CI relations can still be tested, and the structure model remains identifiable. **Supplementary Experiments (Response to #Reviewer Snze).** Due to limited space, we are responding to the supplementary experimental part here. 
We aim to (i) evaluate the performance of our algorithm with different numbers of measured variables, and (ii) explore the behavior of our algorithm when the number of samples is much smaller. The results are reported in Tables 1 to 4 in our attached PDF. In Table 1 $\sim$ 2, one can see that our method still achieves good performance even when the number of measured variables is different. In fact, it can enhance the accuracy in determining the number of latent variables, as a latent variable can be identified from only its three pure children using the tensor rank condition. Moreover, in Table 3 $\sim$ 4, the performance of our method is not good when the sample size is small (e.g., 1k sample size) because the tensor rank is inaccurately calculated in such cases. The 'mismeasurements' metrics are also lower because most of the observed variables are grouped into the same cluster in the procedure of finding causal clusters. As the sample size increases, our method achieves better performance. Pdf: /pdf/8d293763a28b5014b6db7533706e43e2fc707a68.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Addressing Hidden Confounding with Heterogeneous Observational Datasets for Recommendation
Accept (poster)
Summary: The paper studies the problem of selection bias due to hidden confounding in recommendation systems. Previous methods struggle with real-world application due to reliance on strong assumptions or unavailable RCT data. The proposed solution, MetaDebias, leverages heterogeneous observational data, which is more practical and readily available. The approach involves a meta-learning framework that uses both confounded observational data and unconfounded observational data for model training. Experiments show that MetaDebias consistently outperforms baseline methods across various metrics and conditions. Strengths: (S1) The paper compares the proposed approach against many existing methods on 3 benchmark recommendation datasets under several commonly used evaluation metrics. Weaknesses: (W1) I found the use of the term heterogeneous to describe the observational data a bit vague and misleading. I think the authors should make very clear from the beginning that they are working with one dataset where they can observe everything (no latents) and one dataset where some variables are unobserved. (W2) I think the related work section on causal inference (Appendix A.1) could be improved. In particular, it is not clear what the weakness of the data fusion methods is, since they are clustered with IVs and negative controls. Moreover, I think some relevant works that use randomized trials to address hidden confounding are missing, for example [1] and [2]. (W3) The (empirical) performance improvement over previous methods is not so strong. (Minor) Typo in Lemma 1, should be "as follows" not "as followed". [1] Hidden yet quantifiable: A lower bound for confounding strength using randomized trials. De Bartolomeis et al. AISTATS '24 [2] Falsification before Extrapolation in Causal Effect Estimation. Hussain et al. 
NeurIPS '22 Technical Quality: 2 Clarity: 2 Questions for Authors: (Q1) Could you explain in more detail how you are assessing the statistical significance of the performance improvement over the previous methods? Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s great efforts and insightful comments to improve our manuscript. Below, we hope to address your concerns and questions to improve the clarity and readability of our paper. > **[W1] I found the use of the term heterogeneous to describe the observational data a bit vague and misleading.** **Response:** We thank the reviewer for pointing out this issue. In fact, we use the term "heterogeneous" to distinguish whether the data meets the unconfoundedness assumption, i.e., $o \perp (Y_{0}, Y_{1}) \mid x$. Samples with fully observed features or from randomized controlled trials (RCTs) meet this independence assumption and are denoted as $g=1$, while samples with missing features that do not satisfy the assumption are denoted as $g=0$. As suggested by the reviewer, we will clearly specify the dataset information and the above difference in the revised manuscript's introduction. > **[W2] I think the related work section on causal inference (Appendix A.1) could be improved.** **Response:** We thank the reviewer for pointing out this issue and apologize for the lack of clarity. As suggested by the reviewer, below we introduce more relevant works and discuss their limitations. - **Statistical test based methods** introduce statistical tests to compare the causal effects estimated from observational studies and randomized trials, thereby detecting and mitigating hidden confounding [1, 2, 3, 4]. **Correction based methods** are proposed to correct the biased causal effect estimation derived from observational data using the unbiased RCT data [5, 6, 7]; an efficient integrative estimator is established based on semi-parametric theory [8] and further integrated with machine learning models [9]. **Weighted methods** propose to train a biased estimator using observational data, train an unbiased estimator using RCT data, and take the weighted average of these two estimators as the final result [10, 11, 12]. 
- A **limitation** of data fusion methods is the availability of RCT data, as the cost of obtaining RCT data is prohibitively high. Moreover, for the correction based methods, the randomized trial and observational study should share the same support sets. When the support sets differ and only a partial overlap exists, additional strong parametric assumptions are required for extrapolation; for instance, the hidden confounding effect is assumed to be a linear function [5]. > **[W3] The performance improvement over previous methods is not so strong.** **Response:** Thank you for the comments. In fact, we carefully tune the parameters of all baselines, so some baseline results appear competitive. Even so, MetaDebias significantly outperforms these baselines with p-values less than 0.05. To further address your concern, we add experiments on PCIC and Ali-CCP datasets. Due to the space limit, some results are shown below, and please refer to the PDF for more details. |AUC Metric|PCIC|Ali-CCP| |:--:|:--:|:--:| |Naive|0.694±0.005|0.590±0.013| |DR|0.701±0.003|0.608±0.005| |ESCM2-DR|0.706±0.004|0.611±0.008| |Res-DR|0.709±0.004|0.614±0.005| |MetaDebias|**0.715*±0.005**|**0.626*±0.006**| || The results of the statistical significance test indicate that MetaDebias significantly outperforms the baselines. > **[Minor] Typo in Lemma 1.** **Response:** We thank the reviewer for pointing out this issue, and we will carefully polish the revised paper to avoid typos and improve readability. > **[Q1] Could you explain in more detail how you are assessing the statistical significance of the performance improvement over the previous methods?** **Response:** Thanks for the question. Actually, we use a paired t-test to examine whether the proposed method significantly outperforms the **optimal** baseline under different metrics across various datasets. 
The specific testing procedure is as follows: - First, we conduct independent repeated experiments to calculate the mean and variance of all methods on the specified metrics. - Next, the t-statistic is calculated using the mean and variance from both the proposed MetaDebias and the optimal baseline results. - Finally, we identify the critical value based on the pre-specified degrees of freedom and significance level, and compare it with the absolute value of the t-statistic. If the absolute value of the t-statistic exceeds the critical value, the difference is considered significant. For the empirical implementation, we use the ‘stats.ttest_rel’ function from the SciPy library in Python. *** **We sincerely thank you for your feedback and will provide more clarifications and explanations in the revised version, and welcome any further technical advice or questions on this work, and we will do our best to address your concerns.** *** **References** [1] Hidden yet quantifiable: A lower bound for confounding strength using randomized trials [2] Falsification before Extrapolation in Causal Effect Estimation [3] Falsification of internal and external validity in observational studies via conditional moment restrictions [4] Benchmarking Observational Studies with Experimental Data under Right-Censoring [5] Removing hidden confounding by experimental grounding [6] Combining observational and randomized data for estimating heterogeneous treatment effects [7] Elastic integrative analysis of randomised trial and real-world data for treatment heterogeneity estimation [8] Improved Inference for Heterogeneous Treatment Effects Using Real-World Data Subject to Hidden Confounding [9] Integrative R-learner of heterogeneous treatment effects combining experimental and observational studies [10] Adaptive combination of randomized and observational data [11] Combining observational and experimental datasets using shrinkage estimators [12] Combining multiple observational data sources to 
estimate causal effects --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. In light of the clarifications and additional experiments I will raise my score. --- Rebuttal 2: Title: Thank you for raising your score! Comment: We are happy that our clarifications and additional experiments addressed your current concerns. We look forward to your continued support during the follow-up discussion period -- thank you so much!
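As a concrete illustration of the paired t-test procedure described in the rebuttal above, the following sketch applies `scipy.stats.ttest_rel` to AUC values from repeated runs; the numbers are hypothetical stand-ins, not the paper's actual results:

```python
import numpy as np
from scipy import stats

# AUC over five independent repeated runs (hypothetical values,
# for illustration only -- not the actual experimental results)
metadebias = np.array([0.714, 0.716, 0.713, 0.718, 0.715])
best_baseline = np.array([0.708, 0.710, 0.707, 0.711, 0.709])

# paired t-test on the per-run differences
t_stat, p_value = stats.ttest_rel(metadebias, best_baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.5f}")

# the improvement is called significant when p < 0.05
assert p_value < 0.05
```

Since each pair comes from the same run, the paired form removes the run-to-run variance shared by both methods, which is why it is preferred here over an unpaired test.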
Summary: The paper addresses the issue of selection bias in recommender systems, particularly when hidden confounding factors are present. The authors propose a new approach using heterogeneous observational data, where some data is affected by hidden confounding and some is not. The proposed MetaDebias is a meta-learning based debiasing method that models oracle prediction errors and bias from hidden confounders using bi-level optimization for training. Experiments on three public datasets demonstrate that MetaDebias achieves the best performance despite the presence of hidden confounding. Strengths: Addressing hidden confounders in recommender systems is a significant issue. The approach is quite novel. The experimental evaluation is well done, with a clear explanation of most aspects of the experimental evaluation. The related work is broadly covered and compared to. Weaknesses: In the comparative experiments shown in Table 1, the improvement of the MetaDebias algorithm over other algorithms is relatively small. Although the paper is well-organized, the presentation quality needs improvement. For example, some abbreviations should be spelled out when first mentioned, such as Randomized Controlled Trial (RCT) in the abstract. Technical Quality: 4 Clarity: 3 Questions for Authors: How does the proposed MetaDebias algorithm perform in terms of time and space efficiency compared to the baseline? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your approval of the idea and the novelty of this work, and thank you for the helpful suggestions. Below, we hope to address your concerns and questions to improve the clarity and readability of our paper.

> **[W1] In the comparative experiments shown in Table 1, the improvement of the MetaDebias algorithm over other algorithms is relatively small.**

**Response:** Thank you for the comments. In fact, we carefully tuned the parameters of all baselines, which is why some baseline results seem competitive. Even so, MetaDebias significantly outperforms these baselines with a p-value less than 0.05. To further address your concern, we added experiments on the PCIC and Ali-CCP datasets to validate the effectiveness of our method.
- The PCIC dataset contains 19,420 biased ratings and 2,040 unbiased ratings derived from 1,000 users evaluating 1,720 movies. Ali-CCP is a large-scale public dataset for click and conversion prediction in industrial scenarios, where the training, validation, and test data consist of 38 million, 4.2 million, and 43 million records, respectively.
- The AUC performance on both datasets is shown below.

|AUC Metric|PCIC|Ali-CCP|
|:--:|:--:|:--:|
|Naive|0.694±0.005|0.590±0.013|
|IPS|0.696±0.004|0.602±0.004|
|DR|0.701±0.003|0.608±0.005|
|ESMM|0.695±0.005|0.592±0.006|
|ESCM2-IPS|0.705±0.003|0.608±0.009|
|ESCM2-DR|0.706±0.004|0.611±0.008|
|Res-IPS|0.706±0.003|0.611±0.007|
|Res-DR|0.709±0.004|0.614±0.005|
|MetaDebias|**0.715*±0.005**|**0.626*±0.006**|

The results show that the proposed MetaDebias stably outperforms the baseline methods across both datasets, and achieves a significant performance improvement on the large-scale industrial dataset Ali-CCP. Please refer to the attached PDF for more results.

> **[W2] Although the paper is well-organized, the presentation quality needs improvement. For example, some abbreviations should be spelled out when first mentioned, such as Randomized Controlled Trial (RCT) in the abstract.**

**Response:** We thank the reviewer for pointing out this issue and apologize for the lack of clarity. We will carefully polish the revised manuscript to improve readability.

> **[Q1] How does the proposed MetaDebias algorithm perform in terms of time and space efficiency compared to the baseline?**

**Response:** Thanks for the question. As suggested by the reviewer, we compare the parameter size, training time (minutes), and inference time (milliseconds per sample) of different methods on the Coat, Yahoo! R3, and KuaiRec datasets. The results are shown below.

|KuaiRec Dataset|Parameters|Training|Inference|
|:--:|:--:|:--:|:--:|
|Naive|$1 \times$|91.093|0.258|
|DR|$3 \times$|219.948|0.274|
|TDR|$3 \times$|135.778|0.285|
|Multi-DR|$3 \times$|238.706|0.265|
|ESCM2-DR|$3 \times$|104.668|0.782|
|BRD-DR|$5 \times$|198.872|0.249|
|KD-Label|$2 \times$|163.358|0.239|
|Autodebias|$4 \times$|204.555|0.632|
|LTD-DR|$3 \times$|253.277|0.267|
|Bal-DR|$5 \times$|157.784|0.257|
|Res-DR|$5 \times$|165.444|0.244|
|MetaDebias|$5 \times$|174.866|0.254|

|Yahoo!R3 Dataset|Parameters|Training|Inference|
|:--:|:--:|:--:|:--:|
|Naive|$1 \times$|0.574|0.230|
|DR|$3 \times$|6.599|0.258|
|TDR|$3 \times$|6.227|0.265|
|Multi-DR|$3 \times$|5.906|0.283|
|ESCM2-DR|$3 \times$|1.098|0.262|
|BRD-DR|$5 \times$|2.973|0.261|
|KD-Label|$2 \times$|2.449|0.358|
|Autodebias|$4 \times$|1.662|0.243|
|LTD-DR|$3 \times$|1.832|0.262|
|Bal-DR|$5 \times$|1.702|0.236|
|Res-DR|$5 \times$|3.927|0.288|
|MetaDebias|$5 \times$|6.375|0.263|

|Coat Dataset|Parameters|Training|Inference|
|:--:|:--:|:--:|:--:|
|Naive|$1 \times$|0.206|0.250|
|DR|$3 \times$|0.793|0.498|
|TDR|$3 \times$|0.693|0.472|
|Multi-DR|$3 \times$|0.552|0.515|
|ESCM2-DR|$3 \times$|0.462|0.578|
|BRD-DR|$5 \times$|0.682|0.347|
|KD-Label|$2 \times$|1.124|0.519|
|Autodebias|$4 \times$|1.182|0.395|
|LTD-DR|$3 \times$|2.351|0.459|
|Bal-DR|$5 \times$|2.912|0.513|
|Res-DR|$5 \times$|0.674|0.387|
|MetaDebias|$5 \times$|1.916|0.521|

- **Space efficiency:** For all methods, we employ a multi-layer perceptron to model the prediction model, and the same architecture is also used for the propensity and imputation models. As shown in the tables above, the Naive method employs only a single prediction model to fit the training data, with the parameter size denoted as $1 \times$, while the Doubly Robust (DR) method further incorporates both a propensity and an imputation model to achieve double robustness, with the parameter size denoted as $3 \times$. The parameter size of the proposed MetaDebias method is comparable to that of existing methods such as Res-DR, indicating that the proposed method outperforms the competitive baselines under the same parameter size.
- **Time efficiency:** Despite the involvement of five models in the training process, the comparison with other baseline methods shows that the training time of the proposed approach is acceptable, particularly on the large-scale dataset KuaiRec. A potential reason for the relatively long training time is that the bi-level optimization process with assumed updates requires multiple gradient computations throughout the training procedure.
- In summary, the computational resource demands of the proposed method are acceptable. The experimental results can also be found in the one-page attached PDF.

***

**We sincerely thank you for your feedback. We will provide more clarifications and explanations in the revised version, and we welcome any further technical advice or questions on this work; we will do our best to address your concerns.**

---

Rebuttal Comment 1.1: Comment: Thank you for your response. I really like the work. It will be better if the authors could provide more details about the PCIC and Ali-CCP datasets. Is PCIC public access? Are the ratings in the test set of Ali-CCP unbiased?
--- Rebuttal 2: Title: We are happy to provide more details about the PCIC and Ali-CCP datasets! Comment: Thank you for engaging with our responses; we are highly encouraged to know that you "really like our work". Below, we are happy to provide more details about the **PCIC** and **Ali-CCP** datasets.
- **PCIC is a public dataset** for evaluating debiasing algorithms in recommendation [1, 2]. In the training set, users are free to choose items to rate, resulting in 19,420 biased ratings, while in the test set, users are required to rate randomly exposed items, resulting in 2,040 unbiased ratings. **FYI, the PCIC dataset is publicly available: https://competition.huaweicloud.com/information/1000041488/introduction.**
- **Ali-CCP is a public dataset** gathered from real-world traffic logs of the recommender system of an e-commerce platform [3]. The training and test sets are split along the time sequence of the traffic logs, which is a traditional industrial setting. Specifically, the latter half of the data in the time sequence is used as the test set, so **the test data is not exactly unbiased, but the training and inference spaces differ, which is similar to our debiased recommendation problem setup**. In summary, Ali-CCP is also a widely adopted debiased recommendation dataset, especially used for the **post-click conversion rate (pCVR) estimation** task in recommendation systems [3, 4, 5, 6, 7, 8]. **The Ali-CCP dataset is publicly available: https://tianchi.aliyun.com/datalab/dataSet.html?dataId=408.**

***

We will definitely put the above experimental details and results into our revised manuscript -- thank you so much!

***

**References**

[1] Mengyue Yang et al. Debiased Recommendation with User Feature Balancing. Transactions on Information Systems 2023.
[2] Mengyue Yang et al. Generalizable Information Theoretic Causal Representation. arXiv.
[3] Xiao Ma et al. Entire Space Multi-Task Model: An Effective Approach for Estimating Post-Click Conversion Rate. SIGIR 2018.
[4] Wenhao Zhang et al. Large-scale Causal Approaches to Debiasing Post-click Conversion Rate Estimation with Multi-task Learning. WWW 2020.
[5] Dongbo Xi et al. Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising. KDD 2021.
[6] Hao Wang et al. ESCM2: Entire Space Counterfactual Multi-task Model for Post-click Conversion Rate Estimation. SIGIR 2022.
[7] Xiaofan Liu et al. Task Adaptive Multi-learner Network for Joint CTR and CVR Estimation. WWW 2023.
[8] Xinyue Zhang et al. Adversarial-Enhanced Causal Multi-Task Framework for Debiasing Post-Click Conversion Rate Estimation. WWW 2024.

---

Rebuttal Comment 2.1: Comment: Thanks for your response. I will keep my score "7: Accept".
Summary: This paper proposes to use heterogeneous observational data to address hidden confounding in recommender systems. Strengths: + Addressing selection bias in recommender systems is very important. + If the assumption holds, i.e., the confounder missing mechanism follows the user attribute missing mechanism, I would say the method makes sense to me, though it is complicated. + Experiments seem to demonstrate the effectiveness of the proposed method. Weaknesses: - It seems to me that using the missing-feature mechanism to estimate the missing-confounder mechanism is an oversimplification of the problem, and the authors provide no strong empirical evidence to demonstrate the reliability of this simplification. - If confounders are just missing features, why don't we just infer them from the data? Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to my summary of weaknesses. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s great efforts and insightful comments to improve our manuscript. Below, we hope to address your concerns and questions to improve the clarity and readability of our paper.

> **[W1] It seems to me that using the missing-feature mechanism to estimate the missing-confounder mechanism is an oversimplification of the problem, and the authors provide no strong empirical evidence to demonstrate the reliability of this simplification.**

**Response:** Thank you for the comments. Below, we demonstrate why hidden confounders are equivalent to missing features within the potential outcome framework, and introduce the motivation of our work.
- In fact, we follow previous works and adopt the potential outcome framework to formalize the debiasing problem in the presence of hidden confounding. Specifically, we define $X$ as the features of a user-item pair, such as user gender and item color, which influence both click and purchase behaviors. Here, click $T$ and purchase $Y$ are respectively defined as the treatment and outcome.
- Notably, previous works rely on the unconfoundedness assumption (also known as ignorability), i.e., $T \perp (Y_{0}, Y_{1}) \mid X$, where $Y_{0}$ and $Y_{1}$ are potential outcomes. In causal inference, this indicates that given the observed features, the purchase outcome if the user clicks is independent of the click behavior itself. In recommendations, this indicates the observed features are sufficient to model both click and purchase behaviors.
- However, the assumption may sometimes be violated. For example, user income is a feature that may simultaneously influence both clicks and purchases, but if it is missing, the unconfoundedness assumption is violated. Therefore, hidden confounders are equivalent to missing features. In fact, this is a widely adopted setup in many previous studies.
- Note that when only observational data with hidden confounding is available, even with strong assumptions, it remains difficult to eliminate the hidden confounding bias. This illustrates why recent works propose incorporating RCT data for calibration. However, the cost of acquiring RCT data is exceptionally high due to the requirement of random treatment assignment; in recommendation scenarios, this requires users to randomly click and rate items. This motivates us to utilize observational data with fully observed features, rather than RCT data, to address hidden confounding.

> **[W2] If confounders are just missing features, why don't we just infer them from the data?**

**Response:** Thank you for the question. Below, we discuss the feasibility of feature imputation methods from both theoretical and experimental perspectives. In this study, we categorize the data into two groups, $g=1$ and $g=0$, based on whether they satisfy the unconfoundedness assumption. In fact, the samples with $g=1$ can be further categorized into two types.
- As shown in the motivation graph in the paper, the first case with $g=1$ corresponds to complete feature collection. With fully observed features, it is possible to estimate the joint distribution of all features, which allows missing-feature inference as the reviewer noted. However, there may exist selection bias, meaning that the features are missing not at random, which hinders accurate missing-feature imputation.
- Below are the results of three different feature imputation methods on three benchmark datasets. The Sample-Imp method imputes missing values by sampling from a Gaussian distribution, where the mean and variance of the distribution are estimated using the features with $g=1$.
The Naive-Imp method learns a model for imputing missing features from observed features by training with the naive loss on the samples with $g=1$, while IPW-Imp further incorporates propensity scores to account for selection bias.

|NDCG@K|Coat|Yahoo!R3|KuaiRec|
|:--:|:--:|:--:|:--:|
|Naive|0.444±0.014|0.489±0.009|0.540±0.009|
|Sample-Imp|0.449±0.012|0.498±0.012|0.545±0.006|
|Naive-Imp|0.458±0.013|0.496±0.013|0.556±0.008|
|IPW-Imp|0.462±0.011|0.500±0.009|0.563±0.006|
|MetaDebias|**0.473*±0.010**|**0.544*±0.005**|**0.584*±0.003**|

The results show that the feature imputation methods achieve performance improvements compared to the naive approach, but the proposed MetaDebias method still significantly outperforms all baselines. Moreover, IPW-Imp outperforms Naive-Imp, indicating that the features are missing not at random and that imputing missing values remains a challenge.
- RCT data satisfy the independence condition of the unconfoundedness assumption, and are thus also labeled as $g=1$. Such data are obtained through random treatment assignment and do not require the collection of complete features. In this case, where complete features are unavailable for fitting an imputation model, feature imputation methods often struggle to achieve good performance.
- Below, we compare the feature imputation method with the proposed MetaDebias, where the samples with $g=1$ are all derived from randomized trials, and the Random-Imp method uses random values from [-0.5, 0.5] to impute missing feature values.

|NDCG@K|Coat|Yahoo!R3|KuaiRec|
|:--:|:--:|:--:|:--:|
|Naive|0.450±0.012|0.498±0.015|0.545±0.006|
|Random-Imp|0.452±0.013|0.492±0.013|0.542±0.006|
|MetaDebias|**0.468*±0.012**|**0.536*±0.008**|**0.581*±0.003**|

Experimental results indicate that when the samples with $g=1$ are from randomized trials, the performance of the feature imputation method is even inferior to the naive approach that does not involve imputation, and significantly worse than the proposed MetaDebias.
In this case, feature imputation fails.

***

**We hope the above discussion fully addresses your concerns about our work, and we would really appreciate it if you could be generous in raising your score.** We look forward to your insightful and constructive responses to further help us improve the quality of our work. Thank you!

---

Rebuttal 2: Title: We would like to supplement further clarification on the equivalence of "Hidden Confounding" and "Missing Features (Covariates)". Comment: Dear Reviewer TTAY, Thank you again for your time in reviewing our paper and for your thoughtful feedback. Below, we would like to supplement two main claims to help readers understand our problem setup.

**Main Claim 1: In the potential outcome framework [1, 2] of causal inference, ```hidden confounding``` and ```missing features (covariates)``` are equivalent, supported by the relevant literature published in important venues [3, 4, 5, 6].**
- In [3], Section 2 (Binary Outcome) on page 3, the ```unmeasured confounder``` is defined as $U$, whereas in Section 3 (Survival Time) on page 8, $U$ is referred to as a ```covariate```.
- In [4], on page 1, the authors explicitly wrote that _“throughout the article we use the term ```‘unmeasured confounder’``` rather than using terms such as ```‘omitted variable’``` or ```‘unobserved covariate’``` for the sake of consistency and clarity”_.
- In [5], the ```unobserved covariate``` $U$ is first defined in the unconfoundedness assumption in Section 2 (Problem Statement and Preliminaries) on page 2. In the following Section 2.1 (Related Work), $U$ is explicitly referred to as the ```unobserved confounder```.
- In [6], Section 2 (Preliminaries and Challenges to Identification) on page 5, the problem formulation is explicitly stated as follows: _"we are interested in the effects of $X$ on $Y$, which may be ```confounded by a vector of $q$ unobserved covariates $U$```”_, which indicates the equivalence of confounders and covariates.
**Main Claim 2: In recommendation systems, prior works addressing hidden confounding similarly build on the equivalence between ```hidden confounding``` and ```missing features``` [7, 8, 9].**
- In [7], Section 2 (Problem Formulation) on page 2, $x_{u,i}$ is defined as the ```observed features``` of the user-item pair $(u, i)$, and is considered to be the ```measured confounders```. In addition, the ```unmeasured confounders``` $h_{u,i}$ refer to the ```unobserved features```, as shown in Figure 2.
- In [8], the authors claim that “we assume that ```all confounders consist of a measured part $x_{u,i}$ and a hidden (unmeasured) part $h_{u,i}$```, where the latter arises from issues such as information limitations (e.g., friend suggestions) and privacy restrictions (e.g., user salary)” in Section 2 (Problem Setup) on page 3.
- In [9], the authors state that “The ```observed feature/confounder``` $x_{u,i}$ refers to the ```observed feature``` vector from the user $u$, item $i$”, which indicates the consistency between the confounders and the features.

From the above, **we conclude that the terminologies ```hidden confounders``` and ```unobserved features``` are exactly equivalent, not an oversimplification, in the potential outcome framework of causal inference and the debiased recommendation literature. In addition, we also added extensive experiments to validate the superiority of our method compared to the "simple" feature imputation methods** (please kindly refer to our rebuttal). We will definitely put the above discussions and added experiments into our final version to fully address your concern!

***

Could you please check whether our responses properly addressed your concerns? If there are remaining issues, we would appreciate the chance to address them and work towards achieving a higher score. We deeply appreciate all the insightful comments you have posted, as they have greatly enhanced our paper!
With thanks and warm wishes, Submission9441 Authors *** **References** [1] Donald B Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of educational psychology 1974. [2] Neyman et al. On the application of probability theory to agricultural experiments. Statistical Science 1990. [3] Danyu Lin et al. Assessing the sensitivity of regression results to unmeasured confounders in observational studies. Biometrics 1998. [4] Nicole Bohme Carnegie et al. Assessing sensitivity to unmeasured confounding using a simulated potential confounder. Journal of Research on Educational Effectiveness 2016. [5] Nathan Kallus et al. Confounding-robust policy improvement. NeurIPS 2018. [6] Wang Miao et al. Identifying effects of multiple treatments in the presence of unmeasured confounding. Journal of the American Statistical Association 2023. [7] Sihao Ding et al. Addressing Unmeasured Confounder for Recommendation with Sensitivity Analysis. KDD 2022. [8] Haoxuan Li et al. Removing Hidden Confounding in Recommendation: A Unified Multi-Task Learning Approach. NeurIPS 2023. [9] Zhiheng Zhang et al. Robust causal inference for recommender system to overcome noisy confounders. SIGIR 2023. --- Rebuttal Comment 2.1: Comment: Dear Reviewer TTAY, Since the discussion period will end in a few hours, we will be online waiting for your feedback on our rebuttal, which we believe has fully addressed your concerns. We would highly appreciate it if you could take into account our response when updating the rating and having discussions with AC and other reviewers. Thank you so much for your time and efforts. Your feedback would be extremely helpful to us. If you have further comments or questions, we hope for the opportunity to respond to them. Many thanks, Submission9441 Authors
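To make the imputation baselines from the earlier rebuttal concrete, here is a minimal sketch of the Sample-Imp strategy on synthetic data; the data, column layout, and Gaussian fit are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy setup: 100 user-item pairs with 4 features; the last column plays the
# role of the confounder, observed only when g = 1 (all values synthetic)
X = rng.normal(1.0, 0.5, size=(100, 4))
g = rng.integers(0, 2, size=100).astype(bool)
X[~g, -1] = np.nan                        # confounder is missing for g = 0

# Sample-Imp: fit a Gaussian to the confounder values observed with g = 1,
# then impute the missing entries by sampling from that Gaussian
mu = np.nanmean(X[:, -1])
sigma = np.nanstd(X[:, -1])
X_imp = X.copy()
X_imp[~g, -1] = rng.normal(mu, sigma, size=(~g).sum())

assert not np.isnan(X_imp).any()          # every missing value is filled in
```

Naive-Imp would instead regress the confounder on the observed features over the g = 1 samples, and IPW-Imp would additionally weight those samples by inverse propensity scores to correct for the features being missing not at random.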
Rebuttal 1: Rebuttal: Dear reviewers and AC, We sincerely thank all reviewers and the AC for your great effort and constructive comments on our manuscript. During the rebuttal period, we have focused on the reviewers' beneficial suggestions and done our best to add several experiments. As reviewers highlighted, we believe our paper tackles an important and relevant problem (**Reviewer TTAY**, **Reviewer X846**), and introduces a novel and interesting idea (**Reviewer X846**). We also appreciate that the reviewers found our paper well-organized (**Reviewer X846**) and offering solid and convincing experiments (**Reviewer TTAY**, **Reviewer X846**, **Reviewer Yf6Q**). Moreover, we thank the reviewers for the suggestion of incorporating feature imputation methods (**Reviewer TTAY**), as well as for pointing out the concerns regarding the empirical performance improvement (**Reviewer X846**, **Reviewer Yf6Q**) and time and space efficiency (**Reviewer X846**). In response to these comments, we have added the following experiments: - [Reviewer TTAY] We **added experiments to compare the feature imputation methods with the proposed MetaDebias** (in Table 1 and Table 2). - [Reviewer X846 and Reviewer Yf6Q] We **added experiments on the additional PCIC and Ali-CCP datasets to further validate the effectiveness** of MetaDebias (in Table 3). - [Reviewer X846] We **added experiments to investigate the time and space efficiency** on the **Coat**, **Yahoo! R3**, and **KuaiRec** datasets (in Table 4). We hope our response addresses all the reviewers' concerns, and we are more than eager to have further discussions with the reviewers in response to these revisions. Thanks, Submission9441 Authors. Pdf: /pdf/407802b8f7eca1514c94b16bf2cc25aaf6cefb0f.pdf
NeurIPS_2024_submissions_huggingface
2024
Noisy Dual Mirror Descent: A Near Optimal Algorithm for Jointly-DP Convex Resource Allocation
Accept (poster)
Summary: The paper studies a class of convex resource allocation problems, in which the utilities and constraints are private (and bounded). The paper proposes a simple algorithm that applies mirror descent to the dual problem (while the update of the primal variables is assumed to be exact). The main technical result is an improvement of the utility bound (it improves the dependency on the number of constraints). Strengths: - The paper is well-written, and the technical results are interesting (though I did not carefully review proofs) - This class of convex problems (with private utilities and constraints) is well motivated and seems relevant in practice. - There is a concrete improvement by a factor of m in the utility upper bound for this class, under strong duality assumptions. Weaknesses: - Algorithm 1 assumes that the maximization problem (line 3) is solved exactly. This might be realistic for a subclass of problems (for instance when the utility and constraints are linear -- and I suppose this is true in experiments), but the analysis considers a much more general class, for which exact updates are generally not possible (so I find Table 1 to be a bit misleading about this). For the analysis to be complete, this would require allowing inexact updates in line 3. Can the authors work out the details in this case (please explain in the rebuttal)? Otherwise this should be clearly stated as an assumption and reflected in Table 1. - Near optimality: The lower bound assumes that the solution is feasible, while the utility upper bound is for infeasible solutions. I don't think you can reasonably conclude that "noisy mirror descent is near optimal" (line 258). It should be made clear that optimality is still very much open. The title should be revised too. 
- Dependence on $\underline{b}$ in Theorem 3.10: it seems one is trading off a $\sqrt m$ utility improvement for a $1/\underline b^2$ term (both in utility and feasibility), which could be arbitrarily bad depending on the problem instance. Is this unavoidable? Can the authors comment? Minor: - I am not sure how meaningful the "compulsory requests" relaxation is, as it requires some restriction on $p^*$ and comes at the expense of simplicity/presentation. - The introduction mentions personalized recommendation as a motivating example. Can the authors provide more details or references? How is this problem modeled as a constrained resource allocation problem? - Theorems 3.5 and 3.6 are written in terms of $p^{(1)}$. Presumably the bounds in Table 1 use a specific choice of $p^{(1)}$. It would be good to comment on that. - Using $u$ both for utility functions and "violation levels" (line 183) is confusing. Please choose a different notation. - In Condition 3.1, I believe you technically want to require differentiability on relint W, and not on W (otherwise your negative entropy examples would not satisfy this -- please comment on whether this is problematic in your proofs). - Some quantities are not defined (even if standard, please make sure to define them to make the paper self-contained), for example the $-i$ notation and strong convexity w.r.t. $\|\cdot\|_p$. - Assumption 2.4.3 should be rephrased. Maybe "the constant $\gamma$ in (4) is assumed to be ..." Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses section. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Additional limitations should be discussed: - The near optimality claim is not fully justified (given the feasibility assumption in the lower bound). - The algorithm and analysis assume primal updates can be done exactly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer YdYP, Thank you very much for your thorough reading and for providing insightful comments. We respond to your concerns below one by one, and we are more than happy to answer follow-up questions.

**Weaknesses**
- **Response:** Thank you for this sharp observation. Yes, you are right that we have to assume the maximization problem (line 3) is solved exactly. The exact solution is necessary for invoking Danskin's Theorem, which indicates that the vector $\bf{g}^{(t)}$ defined in line 5 is a gradient of the dual problem at $p^{(t)}$. The experiments only examine linear problems, which have closed-form solutions, i.e., exact solutions can be obtained, so there are no issues for the experiments. For the more general problems considered when developing the theory, we believe it is not a big issue either. Because the maximization problem is a concave problem (note that both $F(\cdot)$ and $-a(\cdot)$ are concave), it can be solved to $\epsilon$-optimality, $\epsilon\rightarrow 0$. That is, in a limiting sense, the maximization problem can always be solved exactly. Then, we can invoke Danskin's Theorem for gradients, and all proofs remain valid. However, without exact solutions, Danskin's Theorem is no longer valid, the vector $\bf{g}^{(t)}$ defined in line 5 might not be a (sub)gradient, and all proofs become flawed. We think this case requires an overhaul of the proof, and therefore leave it for future study. Considering that, we will highlight "solved exactly" as an assumption in Table 1. Thank you very much for pointing this out.
- **Response:** Thanks. Other reviewers raised similar concerns, so we posted a response to all reviewers on this concern; please refer to that for more details. Before talking about "optimality", we need to clarify the way we evaluate an algorithm's performance: we actually treat the sum of the utility bounds and constraint violations as the ultimate upper bounds.
In other words, when reading Table 1, we should look at the last two columns together. A constraint violation means resources are in shortage, but purchasing more resources from an emergency supplier costs extra money. So practically, the suboptimality in utility plus the money spent on extra resources should be the ultimate performance measure. We fully understand your concern, and we also believe an algorithm without violation would be the truly optimal algorithm. So we have decided to clarify our interpretation of optimality in the abstract, after Table 1, and after Theorems 3.10 and 4.3. Thank you again for this valuable comment.
- **Response:** Thank you for another sharp observation. First of all, we found that the term $1/\underline{b}^2$ can be improved to $1/\underline{b}$. This has been updated in the revised manuscript. Second, there is indeed a trade-off between an improvement of $\sqrt{m}$ and an additional factor $1/\underline{b}$. To the best of our knowledge, the $1/\underline{b}$ term seems unavoidable. This term has been consistently observed in the (non-private) resource allocation literature that involves analysis through primal-dual relationships; see [1-3]. Some papers interpret $\underline{b}$ as a sign of unbalance within resources, and argue that if $\underline{b}$ is very small, then no algorithm can achieve good performance [1]. In our specific case, the term $1/\underline{b}$ appears around line 509 when bounding $p^*$ under strong duality, and is finally carried into the utility and feasibility bounds. So it seems inevitable. However, because $\bf{b}$ appears on the r.h.s. of the constraints, in practice $\underline{b}$ is unlikely to be an arbitrarily small value (though possible in theory). The references [1-3] below consistently observe satisfying performance in experiments, despite the hidden drawbacks you raised. Ultimately, whether the trade-off matters depends on what assumptions we make on $\bf{b}$; we treat it as a constant.
We hope our response suffices. **Minors** - **Response:** In terms of modeling, it gives more flexibility; for example, physically-challenged students **must** be assigned an accessible seat. The condition just ensures other students do not take the seat away from them. Technically, it facilitates the proof of the lower bounds: the constructed datasets contain compulsory requests. Ultimately, this is a 'relaxation', so we think it should be fine. - **Response:** Thank you. You may find the examples and references in [4] helpful. Some network revenue management problems involving personalized recommendations and limited inventory can be modeled as resource allocation problems. - **Response:** Thanks for alerting us to this. We indeed presume a specific $p^{(1)}$ for Table 1. We also notice that the connection between Theorems 3.5, 3.6, 3.10 and Table 1 is somewhat weak and vague. To strengthen the connection, we will add the table in the attached PDF file to our main text. The table illustrates which $p^{(1)}$ is used for which bounds. - **Response:** Thanks. We now use $\bf{v}$ to denote violation levels. - **Response:** Thanks. Yes, we want differentiability in the interior of $\mathcal{W}$. The modification does not cause problems in our proofs, because the proofs are built on results from the mirror descent literature; with differentiability in int($\mathcal{W}$), our assumptions align with those for mirror descent. - **Response:** Thanks for your thorough reading. These notations are properly introduced now. - **Response:** Thank you again for your thorough reading. The poor presentation of Assumption 2.4.3 was a result of formatting the paper to meet the 9-page limit. With one more page allowed now, we have rephrased this. [1] A dynamic near-optimal algorithm for online linear programming. 
OR, 2014. [2] Dual mirror descent for online allocation problems. ICML, 2020. [3] Online linear programming: Dual convergence, new algorithms, and regret bounds. OR, 2022. [4] An improved analysis of LP-based control for revenue management. OR, 2024.
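As a hypothetical illustration of the exact inner maximization discussed above (the function name `dual_mirror_descent_step`, the discrete-choice setup in which each agent picks one of `k` items, and all numbers are our own assumptions, not the paper's notation): for a linear per-agent problem the inner maximization is an argmax with a closed-form solution, so Danskin's Theorem gives an exact dual (sub)gradient, and one noise-free dual step can be sketched as:

```python
import numpy as np

def dual_mirror_descent_step(p, u_i, C_i, b, n, eta):
    """One noise-free dual step for a linear allocation problem (hypothetical
    discrete-choice setup: agent i picks exactly one of k items).

    The inner maximization max_j u_i[j] - p @ C_i[:, j] has a closed-form
    solution (an argmax), so, by Danskin's Theorem, g below is an exact
    (sub)gradient of the dual objective at p.
    """
    scores = u_i - p @ C_i                 # utility adjusted by resource prices
    j = int(np.argmax(scores))             # exact inner solution
    x = np.zeros(C_i.shape[1])
    x[j] = 1.0                             # allocation for agent i
    g = b / n - C_i @ x                    # exact dual (sub)gradient at p
    p_new = np.maximum(p - eta * g, 0.0)   # mirror step with quadratic potential
    return x, p_new

# Tiny demo: m = 2 resources, k = 3 items, n = 10 agents, all values invented.
u_i = np.array([0.5, 0.9, 0.2])
C_i = np.array([[0.3, 0.8, 0.1],
                [0.2, 0.7, 0.4]])
x, p_new = dual_mirror_descent_step(np.zeros(2), u_i, C_i, b=np.ones(2), n=10, eta=0.1)
```

With zero prices the agent simply picks the highest-utility item; the resulting negative gradient then raises the prices of over-consumed resources on the next step.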
Summary: The paper addresses the allocation problem under Joint Differential Privacy within the framework of a convex consumption function, a concave utility function, and a convex 'personal' domain. The contributions of this work are threefold. Firstly, it derives results similar to those in previous research, but notably, it does so using only weak duality. Secondly, the paper improves the previously established optimality gap by leveraging strong duality. Lastly, it establishes a lower bound for the Joint Differential Privacy allocation problem, specifically for algorithms whose outputs are feasible allocations, within a certain regime of the epsilon parameter. Strengths: The work effectively distinguishes between the capabilities of weak and strong duality. In the case of strong duality, it achieves a significant improvement by reducing the dependency on the quantity of resources $m$ by a square root factor. This improvement, however, comes with the introduction of polylogarithmic dependencies in both the duality gap and the constraint violation. The authors also present a lower bound for the Joint Differential Privacy (JDP) allocation problem under the assumption of strong duality. Weaknesses: The text could benefit from increased clarity in certain areas. For example, it is not immediately clear in the abstract that the lower bound is derived using the strong duality assumption. While this is clarified later in the paper, it would be beneficial for the abstract to be more specific on this point. Additionally, it is not clear to me in what sense the lower bound nearly matches the upper bound. It seems to me that there is an extra $m$ factor in the lower bound, which makes the difference not merely polylog. Am I missing something? Technical Quality: 3 Clarity: 3 Questions for Authors: It caught my attention that the upper bound is for an algorithm that can violate the constraints, but the lower bound is for one that always outputs feasible allocations. 
Is it significantly more difficult to analyze a lower bound for the first type of algorithm (fixing a range of constraint violation)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 8BYy, We want to express our gratitude for your time in reviewing our work and for the valuable comments you sent us. Below, we reply to your concerns one by one. Feel free to let us know of any follow-up questions. **Weaknesses** **Response:** Thank you very much for the suggestions regarding readability. We fully agree with you and have revised our paper accordingly. Now, the abstract clearly indicates that the lower bound is derived under strong duality. As for how the bounds match, we first need to clarify what the upper bound is. We essentially treat the (perhaps weighted) sum of the optimality gap and the constraint violations as the overall upper bound. This is reasonable because the optimality gap bounds are achieved by solutions that violate constraints. Intuitively, violated constraints mean more resources are allocated to agents, and it makes sense that in this situation the utility $F(x^\mathcal{A})$ is closer to the optimal total utility $F(x^*)$ from below. In an extreme case, if $x^{\mathcal{A}}$ violates constraints too much, the value $F(x^\mathcal{A})$ might even be greater than the optimal value $F(x^*)$, where $x^*$ is feasible. This example suggests that, when evaluating algorithms that may violate constraints, we must take constraint violations into account. So when reading the upper bounds in Table 1, we should consider the optimality upper bound and the constraint violations together, taking the sum of the two as the ultimate upper bound. If bounds are understood in this way, then our lower bound matches the upper bounds up to log factors. We hope this addresses your concern. Nevertheless, we found that our current presentation fails to highlight this interpretation. To make things clear, we decided to clarify this with a new paragraph, **Our interpretation of algorithm performance**; you may find the content of this paragraph in our response to all reviewers. Thank you very much for the comments, and we are very happy to take further questions. 
**Questions** **Response:** With our preceding response, we hope the concern raised in your first sentence is well addressed. We therefore move to your second sentence, "is it significantly more difficult to...?" The short answer is "no": it is not significantly more difficult to analyze the lower bound for algorithms with a fixed level of constraint violation. The fixed violation level can be treated as additional resources, and the current proof then still goes through after slight modifications. Specifically, the constructed datasets $\mathcal{D}_0$ and $\mathcal{D}_1$ in the proof of Theorem 4.3 should be modified to contain more compulsory requests that consume the additional resources allowed by the violations. The remaining proofs then follow. But the valid range of $\varepsilon$ may need to be adjusted accordingly, as indicated in step 4 of our proof. Overall, whether to investigate lower bounds for algorithms outputting feasible or infeasible solutions still depends on how we assess algorithms for constrained problems. If constraint violation is allowed, we must take violations into account and give them a proper interpretation. For upper bounds, interpreting violations as additional penalties makes sense, in our opinion. But for lower bounds, we could not come up with a reasonable interpretation, so we only focused on feasible algorithms. In sum, thank you very much for your valuable comments and suggestions. We hope our responses address your concerns, and we are willing to take follow-up questions, if any. --- Rebuttal Comment 1.1: Title: Answer to Rebuttal Comment: I want to thank the authors for answering my questions. I find the answers satisfactory, and they helped me understand the performance of their algorithm much better.
Summary: The submission studies jointly differentially private algorithms for resource allocation problems, which are a broad generalization of packing linear programs. The work addresses this challenge by considering a primal-dual formulation of the problem, and running a noisy mirror descent algorithm on the dual. Standard analyses of the mirror descent method and the impact of noise addition then provide bounds on the suboptimality and approximate feasibility of the method. The most interesting result is that under strong duality, the approach yields bounds which are polylogarithmic in the number of resources. The intuition behind this result is that strong duality provides bounds on the magnitude of an optimal dual solution. This suffices to more accurately locate the dual feasible region, which allows the use of entropy regularization for the noisy mirror descent method. The paper also provides lower bounds, which yield some insight into the possible near optimality of the method. Strengths: 1. The generality of the class of allocation problems considered. 2. The insights derived from duality and the mirror descent algorithm. 3. Numerical results are encouraging. Weaknesses: 1. I am confused about how the lower bound of Theorem 4.3 compares to the upper bounds. The upper bounds of the paper provide in-expectation guarantees for both suboptimality and infeasibility; moreover, the latter is quantified in an $\ell_{\infty}$-sense (which I think is the right approach in this case). By contrast, the lower bound is expressed for algorithms which are almost surely feasible, leading to an expression that only involves the objective. Since the upper bounds do not contain linear factors in m, I don't see how this lower bound is optimal (or that it gives insight on the efficiency of the upper bound). 2. I also left some more specific comments in the questions section. 
I believe clarifying these is very important for the submission to be publishable (and I also think these should be easy fixes). Technical Quality: 3 Clarity: 3 Questions for Authors: MAJOR QUESTIONS: 1. About point 1 in the weaknesses, it would be more consistent if the authors: a) Provide high-probability results for the constraint violation upper bounds (this should be easy, using the high-probability tail bounds of Gaussian noise addition). b) Provide lower bounds for joint DP resource allocation problems. 2. About the upper bounds without strong duality, it is unclear to me how the set $\mathcal{W}$ should be chosen in this case. I think that under the quadratic potential they can just let $\mathcal{W}$ be the whole space, but please clarify. MINOR QUESTIONS: 1. On page 3, I don't understand why it is claimed that the dual may not be convex. The dual objective is written as a maximum of functions which are affine in p, so it should always be convex. 2. The paper does not accurately represent the work done on differentially private versions of mirror descent. The authors should make a more thorough search on this topic, in order to put their work in context. 3. If I understand correctly, the vector $b$ represents the resource capacities, which it is reasonable to expect to be public information. Either way, I believe the comment right before Theorem 3.10 comes way too late, and the discussion about what is private and what is public should be brought up much earlier. 4. I also found the paragraph right after Theorem 3.10 about "purchasing resources" very confusing. Please make efforts to clarify it. 5. The last comment in the proof of Theorem 3.6 should not be in the appendix (nor in the proof). This is a practical consideration that does not belong in the proof, and it should be in the main file instead. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer kLe8, We sincerely thank you for reviewing our work and for your valuable feedback. Below, we respond to the concerns you raised one by one. Please feel free to let us know if you have any further follow-up questions; we are more than happy to take them. **Weaknesses** 1. **Response:** Thanks for the comment. We first want to express our sincere apology for a typo in the lower bound (Theorem 4.3): the bound is supposed to be stated for the in-expectation gap $F(x^*) - E_\mathcal{A}[F(x^{\mathcal{A}})]$, but we omitted the expectation symbol in the main text, though we indeed carried out the proof for the in-expectation guarantee (see line 605). We apologize for any confusion caused; the typo has been fixed. Regarding the lower and upper bounds: when reading upper bounds, we should read the optimality gap bounds and constraint violation bounds together, because the optimality gap bound itself "_does not reflect the whole picture_" (line 180). As we discussed in our response to all reviewers, we therefore treat the (perhaps weighted) sum of both gaps, i.e., optimality gap + constraint violations, as the ultimate upper bound in our mind. The idea here follows our "purchasing resources" statement you mentioned in your next comment, and it admits a social welfare interpretation: the loss in total social welfare is the sum of (i) the loss in total utility of agents and (ii) the decision maker's expenditure on extra resources. Nevertheless, we agree with you and think our current presentation fails to make this interpretation clear. We have thus decided to bring this interpretation into the Introduction as a new paragraph, **Our interpretation of algorithm performance**. The content of this new paragraph can be found in our response to all reviewers above. We sincerely thank you again for this valuable comment. 2. **Response:** Thank you very much. More replies follow below. **MAJOR QUESTIONS** 1. 
**Response**: We apologize again for omitting the expectation symbol. All our bounds are consistent in the sense that they hold for in-expectation performance. As for high-probability bounds, the fix is easy for the optimality gap bounds, but we believe it is not easy for the constraint violations. This is because the constraint violation bound is derived from a sandwich argument, and both sides of the sandwich involve Gaussian random vectors. The lower part of the sandwich appears to be harder for this purpose, because of the term $\sum_t \langle n^{(t)}, p^{(t)}\rangle$ around line 481. In general, $p^{(t)}$ is not assumed to be bounded, and thus we may need to bound it first before employing any Gaussian tail bounds. However, as $p^{(t)}$ depends on all past $n$ and $p$, it is hard to characterize and may need an iteration-by-iteration check. At this moment, we do not have a clear idea of how to do this (perhaps via a martingale argument?), but we will try our best and report any successful attempts. In contrast, our in-expectation bound takes advantage of the zero mean of $n^{(t)}$ and the independence between $n^{(t)}$ and $p^{(t)}$, so this sum term vanishes in expectation and the proofs go through. 2. **Response:** Thanks for the question. Yes, for the case without strong duality, the quadratic potential function works. We also found that the current presentation fails to make the choice of potential functions clear (they are scattered among theorems). To make things clear, we have decided to add the table in the attached PDF to our main text. The table clarifies which potential functions and initial points should be chosen, and what the corresponding performance bounds are. **MINOR QUESTIONS** 1. **Response:** Yes, we concur. The dual is always convex. We have fixed this. 2. **Response:** Thanks for this suggestion. We have added a discussion of noisy mirror descent and its applications to the Introduction, situating our work in this context. 
In short, noisy mirror descent is usually used for private stochastic convex optimization and saddle-point problems, and is applied to the primal problem. In contrast, we apply noisy mirror descent to the dual problem and use it for resource allocation. 3. **Response:** Yes, you are right, the vector $\bf{b}$ is public information. We agree with you and have added explanations in the Introduction on what information is public and what is private. It should now be clear from the Introduction that only $z_i$ is private data; all other information is public. Thank you for this suggestion. 4. **Response:** Thanks for this suggestion. We have added a discussion to clarify this. As this is related to point 1 you mentioned in the Weaknesses, we hope our response there has cleared up your concern. Please feel free to let us know if we have made it clear to you; we will be more than happy to provide follow-up explanations. 5. **Response:** Thank you. We agree with you. The discussion has now been moved to the end of Section 3.1, immediately after Theorem 3.6. --- Rebuttal Comment 1.1: Title: Answer to Rebuttal Comment: First, I would like to thank the authors for their clarifications and thorough work on improving the paper and the results. I generally agree with the authors' answers and proposals, and I am raising my score. There are still two things I find worth commenting on: 1. About the efficiency measure. While I understand that there is a welfare interpretation that justifies merging the suboptimality gap and the constraint violations, I think there is still value in stating results in a bi-criteria sense (i.e., separately addressing the suboptimality and the $\ell_{\infty}$ constraint violation). This is for various reasons: a) Utilities and constraints could be on different scales (money transfers can equalize scales, but this is still something that may be specific to the application). b) The polylog upper bounds look far less exciting when they are added (and some information is lost IMO). 
All in all, my suggestion is that you keep both results. 2. About the high-probability constraint violations. That's interesting! Thank you for the clarification. If the only issue in this sandwiching is the term $\sum_t \langle n^{(t)}, p^{(t)}\rangle$ (I am possibly wrong about this), at this point you are already constraining the dual variables $p$ to lie on the simplex, and since they are predictable (and bounded) it should not be too difficult to upper bound this sum w.h.p. (using standard martingale arguments). There are other nontrivial bounds happening in this proof, so I am not sure whether this is enough. BTW, I think it's OK if you don't address this for the submission. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you very much for raising the score. We are happy to know our response cleared up your concerns. As for the efficiency measure, we agree with you and also believe keeping both results would be more informative, so we will keep both. Lastly, thank you again for all your valuable suggestions throughout the review process. Thank you!
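The reviewer's martingale suggestion above can be illustrated with a small simulation (a sketch under our own assumptions: the toy update rule for $p^{(t)}$, the noise scale, and the tail-bound constant are all hypothetical, not taken from the paper). Because each $p^{(t)}$ is predictable and lies on the simplex, every increment $\langle n^{(t)}, p^{(t)}\rangle$ is conditionally sub-Gaussian with parameter at most $\sigma$ (since $\|p\|_2 \le \|p\|_1 = 1$), so the partial sums form a martingale with Azuma-Hoeffding-style concentration:

```python
import numpy as np

rng = np.random.default_rng(42)
m, T, sigma = 5, 2000, 0.3
p = np.full(m, 1.0 / m)                      # p^{(1)} on the simplex
total = 0.0
for _ in range(T):
    n_t = rng.normal(0.0, sigma, size=m)     # fresh zero-mean Gaussian noise
    total += n_t @ p                         # p was fixed before n_t was drawn
    # Toy predictable update: perturb p using the observed noise, re-normalize.
    p = np.maximum(p + 0.01 * n_t, 1e-8)
    p /= p.sum()

# Azuma-Hoeffding-style tail for sub-Gaussian increments:
# |sum| <= sigma * sqrt(2 T log(2/delta)) with probability >= 1 - delta.
delta = 1e-4
bound = sigma * np.sqrt(2 * T * np.log(2 / delta))
```

The simulated sum stays comfortably inside the tail bound, which is the behavior the martingale argument would formalize.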
null
null
Rebuttal 1: Rebuttal: Dear Reviewers kLe8, 8BYy, YdYP, We sincerely thank you for your time in reviewing our work and for providing valuable feedback. We would like to give a global response to a common concern raised by all of you: - Given that the upper bounds are for an algorithm that can violate constraints, but the lower bound is for one that always outputs feasible solutions, how does the lower bound compare to the upper bounds, and why do we claim the proposed algorithm to be "near optimal"? Before talking about "optimality", we need to clarify the way we evaluate the algorithms' performance. **We essentially treat the sum of utility bounds and constraint violations as the ultimate upper bounds** in our mind. Because the algorithm may output infeasible solutions, solely looking at its utility upper bounds is not reasonable. That is to say, when reading the performance bounds in Table 1, we should look at the last two columns together; failing to do so may lead to false conclusions. To see why, please take a look at the table in the attached PDF. As indicated by the first two rows (or the third and fourth rows), performance varies a lot with different initial points $\bf{p}^{(1)}$. When $\bf{p}^{(1)}\rightarrow \bf{0}$, the utility gap bounds are even smaller, but this comes at the cost of huge constraint violations (so we should not conclude that $\bf{p}^{(1)}\rightarrow \bf{0}$ is a better choice). For this reason, we use the sum of utility bounds and constraint violations as the ultimate performance measure (this idea admits a social welfare interpretation; see the text in the block below). In this sense, our proposed algorithm is near optimal. (On a side note, [HHRW16]'s algorithm is not necessarily near optimal for general convex cases, because their analytic results are only valid for linear problems due to a supporting lemma they invoke. We will clarify this in the revision.) We found that the presentation in our initial submission failed to stress the performance measure we used. 
To this end, we will - (i) Add the following paragraph to Introduction > **Our interpretation of algorithm performance** Because algorithms considered may output infeasible solutions, when assessing their performance, we should take into account both suboptimality in utility and constraint violations, i.e., the last two columns in Table 1. We, therefore, treat the sum of them as the ultimate performance. This idea admits a social welfare interpretation: when the central decision maker (e.g., a government) desires to implement an infeasible allocation, she may purchase additional resources from an emergency supplier to make the allocation feasible. Then the total loss in social welfare is the (perhaps, weighted) sum of (i) the loss in utility of agents and (ii) the decision maker's expenditure on extra resources. Besides, Table 2 suggests that initial points affect performance significantly. If we only look at suboptimality, we may mistakenly conclude that the gap could be arbitrarily small. Following our interpretation of performance, our proposed algorithm is near optimal. - (ii) Add the table in the attached PDF to the main text. We hope our response addresses your concern. Below, we will reply to each one of you individually. Thank you very much again for all the feedback and comments. Yours, Authors of Submission 3277 Pdf: /pdf/1632ce305528644856dd1d2fac89d18ec85859a3.pdf
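As a toy numeric sketch of this welfare interpretation (the function name `total_welfare_loss`, the unit prices, and all numbers below are invented for illustration; they do not come from the paper's experiments), an algorithm's ultimate loss is the utility suboptimality plus the cost of buying the extra resources needed to repair its violations:

```python
# Hypothetical illustration: total welfare loss = (utility suboptimality)
# + (cost of purchasing the extra resources that repair the violations).
def total_welfare_loss(optimal_utility, achieved_utility, violations, unit_prices):
    utility_gap = optimal_utility - achieved_utility   # may be negative if infeasible
    repair_cost = sum(v * c for v, c in zip(violations, unit_prices))
    return utility_gap + repair_cost

# Algorithm A: modest utility, small violations.
loss_a = total_welfare_loss(100.0, 95.0, violations=[1.0, 0.5], unit_prices=[2.0, 2.0])
# Algorithm B: beats the optimal utility, but only by over-consuming resources.
loss_b = total_welfare_loss(100.0, 103.0, violations=[5.0, 4.0], unit_prices=[2.0, 2.0])
```

Here Algorithm B looks better if one reads the utility column alone (it even exceeds the feasible optimum), yet its total welfare loss is larger, which is exactly why the last two columns of Table 1 must be read together.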
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Rethinking the Membrane Dynamics and Optimization Objectives of Spiking Neural Networks
Accept (poster)
Summary: The paper discusses the role of the initial membrane potential (IMP) of neurons in spiking neural networks (SNNs). The authors found that the IMP has a significant impact on the firing patterns of LIF neurons. They then propose a learnable IMP mechanism to improve the performance of SNNs. Additionally, the paper introduces the last time step (LTS) approach to enhance convergence speed, and proposes a label-smoothed temporal efficient training (TET) loss to improve training. The empirical experiments on image classification tasks show the effectiveness of the proposed methods. Strengths: 1. The idea of a learnable IMP is simple and intuitive, and easy to implement in computer simulations. 2. The paper is written in a very straightforward way, making it easy to follow. 3. The simulated experiments on image classification tasks are comprehensive, showing consistent performance gains from using the learnable IMP and additional techniques compared to vanilla SNNs. Weaknesses: 1. One major concern is whether the learnable IMP can be implemented in neuromorphic chips, since the IMP is a float number. In my experience, I do not believe it can be effectively implemented in current hardware such as Loihi (and the paper does not discuss hardware implementation issues at all). If I am wrong, please correct me by comprehensively addressing this issue. If hardware implementation is hard or energy-inefficient (so the main advantage of SNNs disappears), then the proposed models should be compared with ANN models of similar model size (and apparently it is behind ANNs). 2. The title is uninformative and not appropriate. There are a few problems. First, it should be the dynamics of neurons, not of neural networks, since the IMP is a property of individual neurons. Second, the dynamics of neurons include various aspects, such as the model choice (LIF or Hodgkin-Huxley model), noise terms, etc. 
The readers obtain no information from "rethinking the dynamics", and other aspects are not discussed in the paper. A better title might be something like "learnable initial membrane potential enhances spiking neural networks". 3. By improving the dynamics of spiking neurons, it is reasonable to expect the model to be enhanced on time series or sequential processing (e.g., NLP) tasks rather than image classification tasks (since "dynamics" is a temporal property). The lack of experiments on corresponding datasets makes it unclear whether the proposed methods are effective only on CV tasks or in general. 4. When compared with baseline methods, the baseline methods should be briefly explained. Otherwise, it is unclear whether the comparisons are fair, and they lack insights other than "our methods are good". Technical Quality: 3 Clarity: 3 Questions for Authors: - Can the learnable IMP be implemented in neuromorphic hardware such as Intel Loihi? - Can the authors estimate the theoretical energy consumption as in other SNN papers? - How is the performance on other datasets, such as time series or NLP? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The main limitation concerns hardware implementation, which the authors did not discuss. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We have carefully studied your comments and argue that your concerns can be addressed. > Weakness 1: One major concern is whether the learnable IMP can be implemented in neuromorphic chips, since the IMP is a float number. In my experience, I do not believe it can be effectively implemented in current hardware such as Loihi (and the paper does not discuss hardware implementation issues at all). If I am wrong, please correct me by comprehensively addressing this issue. If hardware implementation is hard or energy-inefficient (so the main advantage of SNNs disappears), then the proposed models should be compared with ANN models of similar model size (and apparently it is behind ANNs). **Answer:** The IMP can be implemented on neuromorphic chips. Loihi supports setting the membrane potential (a floating-point number) to a non-zero floating-point value, and LAVA (Loihi's toolchain) also provides APIs for setting the IMP.

```python
import numpy as np
from lava.proc.lif.process import LIF

# Create LIF neurons with a constant initial membrane potential (IMP)
imp = 0.
lif = LIF(
    shape=(16,),  # Number and topological layout of units in the process
    v=imp,        # Initial value of the neurons' voltage (membrane potential)
    vth=1.0,      # Membrane threshold
    dv=0.5,       # Inverse membrane time-constant
    name="lif",
)

# Create LIF neurons with a custom initial membrane potential (IMP)
imp = np.random.uniform(high=0.2, low=-0.2, size=(16,))
lif_imp_version1 = LIF(
    shape=(16,),  # Number and topological layout of units in the process
    v=imp,        # Initial value of the neurons' voltage (membrane potential)
    vth=1.0,      # Membrane threshold
    dv=0.5,       # Inverse membrane time-constant
    name="lif",
)
```

It is worth noting that in previous research the IMP has always existed, but due to the lack of research on more appropriate settings, researchers usually set it to a floating-point value of 0. 
Apart from that, according to our results on the ImageNet test set (**Table R1**), the IMP does not incur significant additional theoretical power consumption, but can effectively improve the performance of SNNs. > Weakness 2: The title is uninformative and not appropriate. There are a few problems. First, it should be the dynamics of neurons, not of neural networks, since the IMP is a property of individual neurons. Second, the dynamics of neurons include various aspects, such as the model choice (LIF or Hodgkin-Huxley model), noise terms, etc. The readers obtain no information from "rethinking the dynamics", and other aspects are not discussed in the paper. A better title might be something like "learnable initial membrane potential enhances spiking neural networks". **Answer:** We appreciate your suggestions. As you mentioned, in this paper we have investigated the dynamics of spiking neurons and their impact on the supervised learning and output representations of SNNs. We have taken your suggestions into consideration and propose changing the title to "Enhancing Spiking Neural Networks via Initial Membrane Potential and Optimized Objectives". > Weakness 3: By improving the dynamics of spiking neurons, it is reasonable to expect the model to be enhanced on time series or sequential processing (e.g., NLP) tasks rather than image classification tasks (since "dynamics" is a temporal property). The lack of experiments on corresponding datasets makes it unclear whether the proposed methods are effective only on CV tasks or in general. **Answer:** Thank you for the suggestion to further validate the effectiveness of our method. We have conducted additional experiments on time series tasks. By incorporating the IMP in SNN-delay[1], we have achieved an effective improvement from 95.10% to 96.02% (**Table R2**). 
Nevertheless, it is important to note that the neuromorphic datasets we used in our experiments (page 8, **Table 2**), including CIFAR10DVS[1] and NCaltech101[2], contain temporal dynamics rather than static images. [1] Li H, Liu H, Ji X, et al. Cifar10-dvs: an event-stream dataset for object classification[J]. Frontiers in Neuroscience, 2017, 11: 309. [2] Orchard G, Jayawant A, Cohen G K, et al. Converting static image datasets to spiking neuromorphic datasets using saccades[J]. Frontiers in Neuroscience, 2015, 9: 437. > Weakness 4: When compared with baseline methods, the baseline methods should be briefly explained. Otherwise, it is unclear whether the comparisons are fair, and they lack insights other than "our methods are good". **Answer:** Due to the length constraint of the main text, we have provided the ablation experiments in the appendix (page 15, **Table 4**). In the ablation experiments, we kept the experimental setup and hardware equipment completely consistent, except for the parameters being compared. We hope the results of the ablation experiments address your concerns, and we will pay more attention to explaining our experimental results and providing more insights. > Question 1: Can the learnable IMP be implemented in neuromorphic hardware such as Intel Loihi? **Answer:** Yes. Please refer to the response to Weakness 1. > Question 2: Can the authors estimate the theoretical energy consumption as in other SNN papers? **Answer:** Please refer to the response to Weakness 1 and the PDF we submitted. > Question 3: How is the performance on other datasets, such as time series or NLP? **Answer:** Please refer to the response to Weakness 3 and **Table R2**. > Limitation 1: The main limitation concerns hardware implementation, which the authors did not discuss. **Answer:** Please refer to the response to Weakness 1. --- Rebuttal Comment 1.1: Title: Discussion Comment: Thanks for the responses! 
I see how you can set the IMP of neurons to different values in Loihi via "imp = np.xxxxx". However, my question is whether the IMP can be made learnable via, e.g., surrogate gradient back-prop on neuromorphic chips (as the paper proposes "learnable IMP"). --- Rebuttal Comment 1.2: Comment: Dear authors, This is a kind reminder that I asked a follow-up question and the deadline of the author-reviewer discussion is near. I am still a bit confused about the detailed implementation of the learnable IMP (since I have never considered adjusting the IMP on neuromorphic hardware), and more importantly I am curious about this. If there are any detailed instructions in the paper or appendix that I have missed, please just point them out. Given that you have well addressed my other comments and the other reviewers' comments, I am willing to raise my score if this question can be resolved. --- Reply to Comment 1.2.1: Comment: Thank you for your response. The primary focus of this paper is on enhancing the performance of SNNs via a learnable IMP, and on ensuring that the trained networks can be deployed on neuromorphic chips for low-power inference. Currently, since neuromorphic chips [1] typically struggle to run standard surrogate gradient backpropagation [2,3], and synaptic plasticity-based approaches [4,5] often lead to lower performance, the performance of SNNs trained with on-chip learning still lags behind that of gradient-based methods [6,7]. Although some approaches have attempted to achieve approximate on-chip backpropagation [7-10], their training speed [11], network scale [12,13], and performance are still lower than those of standard surrogate backpropagation [2]. Therefore, in order to pursue SOTA performance and ensure energy efficiency, related research mainly focuses on training high-performance SNNs [2,3,14,15] on GPUs and then deploying them on neuromorphic hardware [16]. 
At present, although we are unable to provide a new concrete on-chip learning method for the learnable IMP within such a short time, we hope the following approach can be useful for realizing on-chip learning of IMP: (1) use an auxiliary neuron to distribute IMP (by firing a spike) to the other neurons at the initial time step; (2) optimize the synaptic weights of this auxiliary neuron to adjust IMP. In addition, we will further discuss on-chip learning in the limitations section of this paper and explore it in our future work. Thanks again for your questions about this work, and we hope our response can address your concerns. [1] Davies M, Srinivasa N, Lin T H, et al. Loihi: A neuromorphic manycore processor with on-chip learning[J]. IEEE Micro, 2018, 38(1): 82-99. [2] Neftci E O, Mostafa H, Zenke F. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks[J]. IEEE Signal Processing Magazine, 2019, 36(6): 51-63. [3] Davies M, Wild A, Orchard G, et al. Advancing neuromorphic computing with loihi: A survey of results and outlook[J]. Proceedings of the IEEE, 2021, 109(5): 911-934. [4] Kheradpisheh S R, Ganjtabesh M, Thorpe S J, et al. STDP-based spiking deep convolutional neural networks for object recognition[J]. Neural Networks, 2018, 99: 56-67. [5] Zenke F, Ganguli S. Superspike: Supervised learning in multilayer spiking neural networks[J]. Neural computation, 2018, 30(6): 1514-1541. [6] Bal M, Sengupta A. Spikingbert: Distilling bert to train spiking language models using implicit differentiation[C]//Proceedings of the AAAI conference on artificial intelligence. 2024, 38(10): 10998-11006. [7] Yao M, Hu J, Hu T, et al. Spike-driven transformer v2: Meta spiking neural network architecture inspiring the design of next-generation neuromorphic chips[J]. arXiv preprint arXiv:2404.03663, 2024. [8] Kaiser J, Mostafa H, Neftci E. 
Synaptic plasticity dynamics for deep continuous local learning (DECOLLE)[J]. Frontiers in Neuroscience, 2020, 14: 424. [9] Lillicrap T P, Cownden D, Tweed D B, et al. Random synaptic feedback weights support error backpropagation for deep learning[J]. Nature communications, 2016, 7(1): 13276. [10] Bellec G, Scherr F, Subramoney A, et al. A solution to the learning dilemma for recurrent networks of spiking neurons[J]. Nature communications, 2020, 11(1): 3625. [11] Shrestha A, Fang H, Rider D P, et al. In-hardware learning of multilayer spiking neural networks on a neuromorphic processor[C]//2021 58th ACM/IEEE Design Automation Conference (DAC). IEEE, 2021: 367-372. [12] Renner A, Sheldon F, Zlotnik A, et al. The backpropagation algorithm implemented on spiking neuromorphic hardware[J]. arXiv preprint arXiv:2106.07030, 2021. [13] Shrestha A, Fang H, Wu Q, et al. Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks[C]//Proceedings of the International Conference on Neuromorphic Systems. 2019: 1-8. [14] Fang W, Yu Z, Chen Y, et al. Deep residual learning in spiking neural networks[J]. Advances in Neural Information Processing Systems, 2021, 34: 21056-21069. [15] Zhou Z, Zhu Y, He C, et al. Spikformer: When spiking neural network meets transformer[J]. arXiv preprint arXiv:2209.15425, 2022. [16] Ziegler A, Vetter K, Gossard T, et al. Spiking Neural Networks for Fast-Moving Object Detection on Neuromorphic Hardware Devices Using an Event-Based Camera[J]. arXiv preprint arXiv:2403.10677, 2024.
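The auxiliary-neuron idea sketched in the reply above can be illustrated with a toy hard-reset LIF neuron. This is a minimal sketch under assumed dynamics (decay-then-integrate with hard reset); the function name and constants are ours, not from the paper. With the update v[t] = decay * v[t-1] + x, a single auxiliary spike delivering weight w_aux at the first step reproduces a directly-set IMP of v0 = w_aux / decay:

```python
def lif_forward(x, T, decay=0.5, v_th=1.0, v0=0.0, w_aux=0.0):
    """Toy hard-reset LIF neuron driven by constant input x for T steps.

    v0    : directly-set initial membrane potential (learnable IMP).
    w_aux : hypothetical auxiliary-neuron weight; the auxiliary neuron fires
            once at t = 0 and its PSP w_aux is added to the membrane, so
            w_aux = decay * v0 yields the same spike train as setting v0.
    """
    v, spikes = v0, []
    for t in range(T):
        v = decay * v + x + (w_aux if t == 0 else 0.0)  # leaky integration
        s = int(v >= v_th)                              # fire at threshold
        spikes.append(s)
        v *= 1 - s                                      # hard reset on spike
    return spikes

# Changing the IMP changes the spike pattern for the same constant input:
# lif_forward(0.6, 5, v0=0.9) -> [1, 0, 0, 1, 0]
# lif_forward(0.6, 5, v0=0.0) -> [0, 0, 1, 0, 0]
```

Changing v0 (or equivalently w_aux) changes the spike train under identical input, which is the extra pattern mapping that learnable IMP exploits; optimizing the auxiliary synaptic weight then optimizes the IMP.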
Summary: This paper investigates how the initial membrane potential affects the neuronal spike pattern and the model performance. Evolving the initial membrane potential generates novel firing patterns and thereby changes the SNN output. Thus, by making the initial membrane potential a trainable parameter, the SNN can achieve significant improvement. Strengths: 1. This paper gives a detailed illustration that, by adjusting the initial membrane potential, additional pattern mapping can be generated, thus improving the expression capacity. So the proposed idea is natural. 2. This paper designs a specific label smooth loss function to effectively control the firing level, which is delicate and efficient. 3. Figure 4 shows that with a sufficiently good membrane potential, one time step is enough for a good output; all these observations make the trainable initial membrane potential theoretically sound. 4. Sufficient experiments are conducted, which makes the proposed method more convincing. We can see that with a trainable initial membrane potential, the accuracy is more satisfactory. Weaknesses: 1. In this paper, all experiments and illustrations use a 4-time-step SNN. It remains unclear how much the initial membrane potential influences the output in long-time-step tasks. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Since we only use the output of the last time step to give an inference, I wonder if there is any information missing in this process. Have you ever considered using the last few time steps' outputs instead of just one? 2. From another point of view, adjusting the initial membrane potential is just injecting a controlled noise into the membrane potential at first. Have the authors ever considered giving a controlled noise to each time step to improve the performance? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging our research direction and the innovative approach we have employed. > Weakness 1: In this paper, all experiments and illustrations use a 4-time-step SNN. It remains unclear how much the initial membrane potential influences the output in long-time-step tasks. **Answer:** To enhance the clarity and readability of the chapter content, we set T=4 to simplify the analysis process. It is worth noting that in the experimental section, we cover the cases of T = 8, 10, and 16 (on page 8 Table 2). The experimental results demonstrate that IMP can maintain its effectiveness even at longer time steps (**Table R3** and **Table R11**). IMP also exhibits significant improvements in time series tasks, which can be seen in **Table R2**. | Model | Training Method | Spike Neuron | Time-step | Accuracy | |:----------:|:---------------:|:------------:|:---------:|:---------:| | Spikformer | TET | LIF | 64 | 97.57 | | Spikformer | TET | LIF + IMP | 64 | **98.26** | **Table R10: Accuracy on DVS128Gesture dataset.** > Question 1: Since we only use the output of the last time step to give an inference, I wonder if there is any > information missing in this process. Have you ever considered using the last few time steps' outputs instead of just one? **Answer:** In static tasks, using only the last time step can indeed lead to information loss, but applying the same supervision label to multiple or all time steps may also make the model difficult to fit. There may be a tradeoff between information loss and fitting difficulty. LTS only uses the last time step for supervision and output representation, which can directly verify the hypothesis. Additionally, we believe that using the mean value of a few time steps' outputs instead of just one could be a good alternative to LTS, especially when the number of time steps is relatively large. 
As mentioned in the A.2 Limitations section, the advantages of LTS tend to weaken as T increases, particularly when $T > 8$, as shown in Figure 8. > Question 2: From another point of view, adjusting the initial membrane potential is just injecting a controlled noise > into the membrane potential at first. Have the authors ever considered giving a controlled noise to each time step to > improve the performance? **Answer:** We have not yet tried related methods; this direction remains unexplored. Adding controlled noise at each time step can be seen as a new method of membrane potential reduction/increase, which may have the potential to improve the capacity of the model. Similar methods like GLIF have achieved beneficial improvements by learning key parameters in the membrane dynamics. In summary, we appreciate your suggestions for future research directions. --- Rebuttal Comment 1.1: Title: Discussion Comment: Thank you for the rebuttal. I'd like to stick to my original score.
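The two supervision schemes contrasted above (the same label at every time step vs. last-time-step-only) can be sketched as follows. This is an illustrative sketch with our own function names, not the paper's exact TET/LTS implementation:

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single example."""
    z = logits - logits.max()                 # shift for numerical stability
    log_p = z - np.log(np.exp(z).sum())
    return -log_p[label]

def tet_style_loss(outputs, label):
    """Supervise every time step's output with the same label, then average."""
    return float(np.mean([cross_entropy(y, label) for y in outputs]))

def lts_loss(outputs, label):
    """Supervise (and later predict with) the last time step only (LTS)."""
    return float(cross_entropy(outputs[-1], label))
```

With constant input, only the membrane-potential state differs across steps, so last-time-step supervision avoids forcing every intermediate output to fit the label, at the cost of discarding earlier steps.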
Summary: This paper analyzes the dynamics of membrane potential and proposes to improve performance by making the initial membrane potential a learnable parameter. This article proposes to use only the output of the last timestep as the classification feature during inference. In general, this paper proposes a simple but effective method. Strengths: 1. This paper provides a comprehensive and detailed analysis of the dynamics of membrane potential, which can inspire others. 2. The method proposed in this paper is simple but effective. Taking SEW-ResNet as the basic model, IMP and LTS both bring significant gains. Weaknesses: Although this paper is well-analyzed, it still has some weaknesses. It is not enough to just experiment with SEW-ResNet on a static dataset. If the residual technique is altered, such as MS-ResNet [1], the residual connection is calculated using the membrane potential. In this instance, will a better initialization of the membrane potential result in considerable gains? If the network design is altered, such as the Spikformer [2] with SEW residual connections and the Spike-driven transformer [3] with MS residual connections. Are the proposed IMP and LTS still valid? [1] Hu Y, Deng L, Wu Y, et al. Advancing spiking neural networks toward deep residual learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024. [2] Zhou Z, Zhu Y, He C, et al. Spikformer: When Spiking Neural Network Meets Transformer[C]//The Eleventh International Conference on Learning Representations. [3] Yao M, Hu J, Zhou Z, et al. Spike-driven transformer[J]. Advances in neural information processing systems, 2023, 36. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In the DVS datasets, each timestep has its own unique information. Will LTS lose the information of the previous time step and cause unstable recognition accuracy? 2. Why does PSN lose dynamics and biological rationality? Aren't soft-reset SNNs dynamic? 
Biological rationality seems to have nothing to do with the reset method, and biological rationality is not important in deep learning. 3. Hard reset will cause the model to be unable to calculate in parallel. Is the method proposed in this article only applicable to Hard-reset SNNs? Can this method scale up to a larger timestep (for example, 64)? In language and autoregressive tasks, the timestep of SNN is aligned with the sequence length, which is extremely large. Using hard-reset SNNs may cause serious inefficiency. What do you think of this? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see the weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We first list your advice and questions, then give our detailed answers. > Weakness 1: If the residual technique is altered, such as MS-ResNet, the residual connection is calculated using the membrane potential. In this instance, will a better initialization of the membrane potential result in considerable gains? **Answer:** Yes. The effectiveness of our proposed IMP and LTS methods with membrane-potential residual connections (the MS-ResNet architecture) can be evaluated on the ImageNet100 dataset. | Model | Method | Epoch | Acc1 | Acc5 | |:-----------:|:-------:|:-----:|:---------:|:---------:| | MS-ResNet18 | TET | 100 | 73.92 | 91.62 | | MS-ResNet18 | SDT | 100 | 74.78 | 91.98 | | MS-ResNet18 | LTS | 100 | 75.78 | 92.92 | | MS-ResNet18 | LTS+IMP | 100 | **76.12** | **93.36** | **Table R6: Accuracy on ImageNet100 dataset.** > Weakness 2: If the network design is altered, such as the Spikformer with SEW residual connections and the Spike-driven transformer with MS residual connections. Are the proposed IMP and LTS still valid? **Answer:** Yes, we have tried SpikingResFormer [1], which combines a self-attention mechanism with membrane potential residual connections. Due to the limitations of time and computational resources, we used fewer epochs compared to the original paper. Although LTS is slightly lower than SDT, it still outperforms TET, which is consistent with our analysis. Additionally, we have verified the effectiveness of the combination of the self-attention mechanism and IMP on the CIFAR10DVS dataset (**Table R3** and **Table R8**). We hope these results can address your concerns. 
| Model | Method | Accuracy | SOPs(G) | Power(mJ) | |:-------------------:|:------:|:----------:|:-----------:|:-----------:| | SpikingResFormer-S | TET | 73.500 | 3.77187 | 3.39468 | | SpikingResFormer-S | SDT | **73.988** | 3.42255 | 3.08029 | | SpikingResFormer-S | LTS | 73.974 | **3.31618** | **2.98456** | **Table R7: Accuracy and theoretical energy consumption on ImageNet1k dataset.** | Model | Training Method | Spike Neuron | Time-step | Accuracy | | :--------: | :-------------: | :----------: | :-------: | :------: | | Spikformer | TET | LIF | 16 | 82.8 | | Spikformer | TET | LIF + IMP | 16 | **83.4** | **Table R8: Accuracy on CIFAR10DVS dataset.** [1] Shi X, Hao Z, Yu Z. SpikingResformer: Bridging ResNet and Vision Transformer in Spiking Neural Networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 5610-5619. > Question 1: In the DVS datasets, each timestep has its own unique information. Will LTS lose the information of the previous time step and cause unstable recognition accuracy? **Answer:** Yes, the LTS method can lead to information loss, especially in DVS tasks with a large number of time steps. We only suggest considering the use of LTS in static tasks, as the effectiveness of LTS relies on the assumption of having the same input at each time step. | Model | Training Method | T=4 | T=8 | T=10 | T=16 | |:----------:|:---------------:|:---:|:---:|:----:|:----:| |VGG| TET | 83.8 | 85.0 | 85.8 |86.4| |VGG| SDT | 83.4 | 84.3 | 84.4 |85.1| |VGG| LTS | 83.7 | 83.0 | 82.9 |82.3| **Table R9: Accuracy on CIFAR10DVS dataset (resized to 48).** > Question 2: Why does PSN lose dynamics and biological rationality? Aren't soft-reset SNNs dynamic? Biological rationality seems to have nothing to do with the reset method, and biological rationality is not important in deep learning. **Answer:** PSN does not use soft-reset dynamics, but rather a no-reset mechanism. 
PSN removes the reset mechanism and achieves parallelization through linear mapping or convolution (scanning) across the time dimension. But we agree with your point that biological rationality is not important in deep learning. We will modify the statement to: "It should be noted that the removal of the reset process in PSN means that the spiking activities in the previous time steps will not affect the membrane potential values in the subsequent time steps." and change the "Dynamic" item in the table to "Reset". > Question 3.1: Hard reset will cause the model to be unable to calculate in parallel. Is the method proposed in this article only applicable to Hard-reset SNNs? **Answer:** Our proposed methods and theoretical analysis do not conflict with the reset mechanism. IMP can generate new spiking patterns and pattern mappings for both Hard and Soft reset SNNs. LTS is designed specifically for static tasks, and the reset mechanism used in SNNs does not affect our hypotheses and theoretical results. > Question 3.2: Can this method scale up to a larger timestep (for example, 64)? **Answer:** Yes, IMP remains effective with larger numbers of time steps (T=64), as demonstrated by experiments on the CIFAR10DVS dataset (**Table R3**), and IMP is also effective on time series tasks, as shown in **Table R2**. > Question 3.3: In language and autoregressive tasks, the timestep of SNN is aligned with the sequence length, which is extremely large. Using hard-reset SNNs may cause serious inefficiency. What do you think of this? **Answer:** We agree with your point that the hard-reset mechanism can lead to inefficiency and affect the performance of SNNs on time series tasks, as the information from previous time steps will be completely lost when hard-reset neurons fire. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The authors' response addressed all my concerns. I will raise my score to weak accept.
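The reset behaviours discussed in this exchange (hard reset, soft reset, and PSN-style no reset) differ only in one line of the membrane update. A toy sketch with assumed decay dynamics, not code from the paper:

```python
def membrane_step(v, x, decay=0.5, v_th=1.0, mode="hard"):
    """One time step of a toy LIF membrane under different reset mechanisms."""
    v = decay * v + x          # leaky integration
    s = int(v >= v_th)         # spike if the threshold is reached
    if mode == "hard":
        v *= 1 - s             # hard reset: membrane cleared, history lost
    elif mode == "soft":
        v -= s * v_th          # soft reset: subtract threshold, keep residual
    # mode == "none": PSN-style, spiking never affects the membrane
    return v, s
```

The hard-reset branch makes v[t] depend nonlinearly on whether a spike occurred, which is what blocks parallelization over time; removing the reset (PSN) makes the membrane a linear function of the inputs and hence parallelizable.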
Summary: Presents a novel approach to understanding and modeling the dynamics of spiking neural networks. Strengths: Offers new insights into SNN modeling and potential applications in various domains. The theoretical framework for SNN dynamics is novel and addresses existing limitations in the field. Weaknesses: Some of the assumptions in the theoretical framework could be more explicitly stated and justified. Additional experiments, particularly in real-world scenarios, could further validate the proposed methods. The introduction and related work sections could provide more context to better situate the contributions within the broader literature. Technical Quality: 3 Clarity: 3 Questions for Authors: Can the authors provide more details on the assumptions made in the theoretical framework? How do these assumptions impact the generalizability of the proposed methods? How does the proposed framework compare to other state-of-the-art models in terms of computational efficiency and scalability? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have addressed the limitations of their work to a reasonable extent. They acknowledge the assumptions made in their theoretical framework and discuss potential areas for future research. However, the discussion on the broader societal impact of the work could be expanded, particularly in terms of ethical considerations and potential negative consequences. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your efforts in reviewing our article and providing constructive feedback. We’d like to reply to your concerns in detail. > Weakness 1: Some of the assumptions in the theoretical framework could be more explicitly stated and justified. **Answer:** We will provide a clearer explanation of our assumptions in the subsequent version and ensure that the proof process strictly follows the format specifications, in order to improve the accuracy and readability of the content. > Weakness 2: Additional experiments, particularly in real-world scenarios, could further validate the proposed methods. **Answer:** Thank you for the suggestion. We can provide the performance of our IMP method on the DVS128Gesture dataset, which captures human gesture movement trajectories using event cameras, thus being closer to real-world scenarios. Due to the current limitations in time and computational resources, we hope to try applying our method on larger-scale real-world scenarios dataset in our future work. | Model | Training Method | Spike Neuron | Time-step | Accuracy | |:----------:|:---------------:|:------------:|:---------:|:---------:| | Spikformer | TET | LIF | 64 | 97.57 | | Spikformer | TET | LIF + IMP | 64 | **98.26** | **Table R4: Accuracy on DVS128Gesture dataset.** > Weakness 3: The introduction and related work sections could provide more context to better situate the contributions within the broader literature. **Answer:** We agree with your suggestion. We will supplement the introduction and related work sections to ensure they can better highlight our contributions. > Question 1: Can the authors provide more details on the assumptions made in the theoretical framework? **Answer:** Sure. 
For static tasks, we define $f(\cdot)$ as the network computation, $\theta$ as the network weights, $s[t]$ as the set of membrane potentials of all neurons in the SNN at time step $t$, $x$ as the constant input intensity for time steps $t=1,2,...,T$, and $y[t]$ as the corresponding output. The output of the SNN at each time step can then be represented as $$ y[t] = f(x, s[t], \theta). $$ It can be seen that the temporal variation of the output $y[t]$ is determined solely by the membrane potential set $s[t]$. Based on experimental observations, we have the following assumptions: **A1**: $s[t]$ is time-varying in SNNs. **A2**: The change in $s[t]$ can alter the output $y[t]$. **A3**: Applying the same supervised label to $y[t]$ across all time steps may lead to difficulties in SNN convergence. > Question 2: How do these assumptions impact the generalizability of the proposed methods? **Answer:** **A1** typically holds true, except in the absence of external input. For **A2**, these phenomena are commonly observed in experiments, although minor perturbations may not significantly alter the output of the SNN. For **A3**, it is important to note that for static image classification tasks, correct results can be obtained without perfect fitting. Therefore, when the model has sufficient expressive power to output similar $y[t]$ for different $s[t]$, and the task is simple enough, the performance of TET can approach the level of LTS. > Question 3: How does the proposed framework compare to other state-of-the-art models in terms of computational efficiency and scalability? **Answer:** By combining our proposed method, the performance of SEW-ResNet can be close to that of Transformer-based SNNs. However, compared to the current state-of-the-art Spiking Transformer models, there is still a certain gap (for more details, please refer to the PDF file we have submitted). 
Additionally, we hope to combine the proposed method with these advanced models, and explore its potential in a wider range of application scenarios. | Architecture | Param (M) | SOPs (G) | Power (mJ) | Accuracy | | :--------------------: | :-------: | :------: | :--------: | :------: | | Spikformer-8-384 | 16.81 | 6.82 | 7.73 | 70.24 | | Spikformer-6-512 | 23.37 | 8.69 | 9.42 | 72.46 | | Spike-driven 8-384 | 16.81 | - | 3.90 | 72.28 | | Meta-SpikeFormer | 15.1 | - | 16.70 | 74.10 | | SEW-R50-LTS (ours) | 25.56 | 3.10 | 2.79 | 71.24 | | SEW-R50-LTS+IMP (ours) | 36.67 | 3.45 | 3.11 | 71.83 | **Table R5: Accuracy and theoretical energy consumption on ImageNet test set.** > Limitation 1: The discussion on the broader societal impact of the work could be expanded, particularly in terms of ethical considerations and potential negative consequences. **Answer:** In this work, we found that adjusting the initial membrane potential can alter the spiking patterns of SNNs. Furthermore, in static tasks, the variation in SNN membrane potentials can influence the output. This finding may lead to the development of novel adversarial attack methods targeting SNNs. Attackers could achieve this by manipulating the membrane potential at specific future time steps. Such attacks would only require perturbation of the first few input frames, and could control the timing of errors in the SNN at a future time step. Compared to adding adversarial noise at all input time steps, this approach could be more stealthy.
Rebuttal 1: Rebuttal: Thanks for all reviewers' valuable comments. We are encouraged that reviewers recognize the effectiveness of setting learnable initial membrane potential states (IMP) for spiking neurons, and consider the idea of using the output of the last time step for supervised learning and output representation in static tasks to be interesting. Meanwhile, most of the reviewers were concerned about the power consumption of the learnable IMP method, as well as the effectiveness of the IMP method in time series tasks. Our responses to these questions are as follows. ### Power Consumption IMP does not incur significant additional theoretical power consumption, but can effectively improve the performance of SNNs. More information can be found in the PDF we have submitted. | Model | Method | Accuracy | SOPs(G) | Power(mJ) | | :----------: | :-----: | :-------: | :---------: | :---------: | | SEW-ResNet18 | TET | 62.92 | 1.36055 | 1.22449 | | SEW-ResNet18 | SDT | 63.21 | 1.37418 | 1.23676 | | SEW-ResNet18 | LTS | 64.33 | **1.21427** | **1.09285** | | SEW-ResNet18 | LTS+IMP | **65.38** | 1.31371 | 1.18234 | | SEW-ResNet34 | TET | 67.98 | 3.59539 | 3.23585 | | SEW-ResNet34 | SDT | 68.10 | 3.52732 | 3.17459 | | SEW-ResNet34 | LTS | 68.10 | **3.11694** | **2.80525** | | SEW-ResNet34 | LTS+IMP | **68.90** | 3.12180 | 2.80962 | | SEW-ResNet50 | TET | 69.87 | 3.40181 | 3.06163 | | SEW-ResNet50 | SDT | 70.33 | 3.20071 | 2.88064 | | SEW-ResNet50 | LTS | 71.24 | **3.10432** | **2.79389** | | SEW-ResNet50 | LTS+IMP | **71.83** | 3.45325 | 3.10792 | **Table R1: Accuracy and theoretical energy consumption on ImageNet1k dataset.** ### Time Series Tasks We have conducted additional experiments on the time series dataset Spiking Heidelberg Digits [1]. By incorporating IMP in SNN-delay [2], we have achieved an improvement from 95.10% to 96.02%, surpassing the current state-of-the-art ANN model Event-SSM [3], to the best of our knowledge. 
| Model | Network Architecture | Spike Neuron | Param | Accuracy | | :----------: | :------------------: | :----------: | :----: | :------: | | Event-SSM[3] | SSM(ANN) | - | ~400k | 95.50 | | SNN-Delay[2] | MLP+DCLS | LIF | 214.0k | 95.11 | | SNN-Delay[2] | MLP+DCLS | LIF + IMP | 214.5k | 96.02 | **Table R2: Accuracy on Spiking Heidelberg Digits.** The Spiking Heidelberg Digits (SHD) [1] dataset is an audio-based classification dataset of 1k spoken digits ranging from 0 to 9 in the English and German languages. The audio waveforms have been converted into spike trains using an artificial model of the inner ear and parts of the ascending auditory pathway. [1] Cramer B, Stradmann Y, Schemmel J, et al. The heidelberg spiking data sets for the systematic evaluation of spiking neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 33(7): 2744-2757. [2] Hammouamri I, Khalfaoui-Hassani I, Masquelier T. Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings[C]//The Twelfth International Conference on Learning Representations. [3] Schöne M, Sushma N M, Zhuge J, et al. Scalable Event-by-event Processing of Neuromorphic Sensory Signals With Deep State-Space Models[J]. arXiv preprint arXiv:2404.18508, 2024. ### Long Time-Steps We further validated the effectiveness of IMP over longer time steps. | Model | Training Method | Spike Neuron | Time-step | Accuracy | | :--------: | :-------------: | :----------: | :-------: | :------: | | Spikformer | SDT | LIF | 32 | 82.6 | | Spikformer | SDT | LIF + IMP | 32 | **83.1** | | Spikformer | SDT | LIF | 48 | 81.5 | | Spikformer | SDT | LIF + IMP | 48 | **82.5** | | Spikformer | SDT | LIF | 64 | 81.1 | | Spikformer | SDT | LIF + IMP | 64 | **81.4** | **Table R3: Accuracy on CIFAR10DVS dataset.** Pdf: /pdf/84199887cd25f42d45d87830350b3a99ca16ed29.pdf
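The Power(mJ) columns in Tables R1, R5, and R7 are consistent with the convention, common in SNN papers, of estimating roughly 0.9 pJ per synaptic operation (an accumulate on 45 nm hardware). Assuming that convention, the conversion from SOPs to energy is:

```python
E_AC_PJ = 0.9  # assumed energy per synaptic accumulate, in pJ (45 nm estimate)

def theoretical_energy_mj(sops_giga):
    """Convert giga synaptic operations (SOPs) to theoretical energy in mJ."""
    joules = sops_giga * 1e9 * E_AC_PJ * 1e-12  # ops * energy-per-op
    return joules * 1e3                         # J -> mJ

# e.g. SEW-ResNet18 + TET in Table R1: 1.36055 G SOPs -> ~1.22449 mJ
```

Since 1 G SOPs at 0.9 pJ each is 0.9 mJ, the Power column is simply 0.9 times the SOPs column throughout the tables.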
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS
Accept (spotlight)
Summary: LightGaussian introduces a three-stage technique to efficiently reduce the number of Gaussian primitives. In the first stage, redundant Gaussian primitives are pruned based on global significance, rather than opacity. The second stage involves SH distillation, which utilizes data augmentation from synthesized pseudo views. Finally, the third stage employs Vector Quantization of the SH coefficients. By incorporating these three approaches, LightGaussian achieves a remarkable 15x compression while maintaining competitive rendering quality. Strengths: 1. LightGaussian introduces a novel pruning strategy based on global significance, along with SH Distillation and Vector Quantization techniques, to effectively reduce both primitive redundancy and feature redundancy. 2. LightGaussian functions as a plugin and can be utilized in any GS representation model. 3. The authors conducted a comprehensive ablation study to demonstrate the effectiveness of the proposed method. 4. This method efficiently reduces the number of Gaussian primitives, leading to a significant improvement in rendering speed. 5. The paper is well-written and easy to follow. Weaknesses: 1. **Lack of necessary experimental analysis.** Although the authors compare LightGaussian with Compressed 3D-GS and Compact 3D-GS in Table 1, they fail to provide any analysis in the experimental results section, which is quite peculiar. Moreover, most of the experimental analyses focus on NeRF, which seems unnecessary. Furthermore, in Table 1, it is difficult to distinguish the visual quality and compression capacity between LightGaussian and Compressed 3D-GS, and, strangely, there is no quantitative number provided for the FPS of Compressed 3D-GS. 2. **Lack of novelty.** This method appears to be a post-processing approach for 3D-GS and lacks essential characteristics, unlike Scaffold-GS [1], which proposes a hierarchical 3D Gaussian representation and only stores the information of anchors. 
It would be beneficial for the authors to compare LightGaussian with Scaffold-GS and its subsequent work, HAC [2], in terms of both experimental results and analysis. 3. **Poor layout formatting.** In Figure 7, the author incorrectly typeset the paper, leading to a compromised reading experience. By the way, the comparison in Figure 6 is not clearly distinguishable, as the eight images appear to be nearly identical. [1] Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [2] HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Please provide a more detailed analysis of compression GS methods, such as Compressed 3D-GS and Compact 3D-GS, in the experimental section. In addition to comparing FPS, it would be valuable to include a comparison of training times as well. 2. It would greatly enhance the paper if the authors compare LightGaussian with Scaffold-GS and HAC in the experimental section for a more comprehensive evaluation. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors discuss the limitation in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1]: Analysis over other methods? Why is FPS missing for Compressed 3D-GS?** Under a fair experimental setting, we observed that LightGaussian outperforms Compact 3DGS and Compressed 3DGS in 4 out of all 5 metrics on the MipNeRF360 dataset while also running fastest on the Tanks and Temples dataset. Notably, our method significantly boosts the FPS from 144 (3D-GS) to 237, a 64% improvement. We also surpass Compact 3DGS by 53% and Compressed 3DGS by 55% in FPS. LightGaussian performs similarly in overall visual quality while preserving thin structures slightly better than other methods, as shown in the attached PDF. Please refer to the general response for more details. **[W2]: LightGaussian and Scaffold-GS** We integrated the Gaussian Pruning & Recovery into Scaffold-GS to prune up to 80% of the redundant neural Gaussians and improve the rendering efficiency. See the general response for details. **[W4] Update Figure 6 and Figure 7.** We will revise the layout of Figure 7 and update Figure 6 with zoomed-in regions to improve the reading experience for both figures. **[Q1] FPS and training time comparison with Compressed 3D-GS and Compact 3D-GS** Please refer to the general response for training and inference efficiency. **[Q2] Comparison with Scaffold-GS and HAC.** Scaffold-GS proposes to initialize a sparse grid of anchor points from SfM points, and tethers a set of neural Gaussians with learnable offsets. This method constrains the distribution of 3D Gaussians while achieving better reconstruction accuracy. HAC further explores a structured hash grid to exploit the inherent consistencies among unorganized 3D Gaussians, achieving a remarkable compression ratio over 3D-GS. We demonstrate the application of LightGaussian upon Scaffold-GS in the general response. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' sincere reply and effort. I agree with the performance of LightGaussian, thanks to detailed experiments, and all my concerns have been tackled. 
I will raise my score to 6. --- Reply to Comment 1.1.1: Title: Response from the Authors Comment: We are pleased that our responses have well addressed your concerns. We sincerely appreciate your willingness to raise your score from 4 to 6 and kindly note that it has not yet been updated in the system.
Summary: The paper compresses 3D Gaussian Splatting (3D-GS) models mainly by reducing the number of Gaussians and compressing their per-Gaussian feature size. With three key steps: Gaussian pruning and recovery, spherical harmonics distillation, and vector quantization, the paper achieves ~15x reduction in disk usage and ~2x faster rendering speeds with minimal loss in rendering quality. The effectiveness is verified on the Mip-NeRF 360, Tank and Temple, and NeRF synthetic datasets. Strengths: * Compacting the storage cost and rendering speed of 3DGS with minimal loss of rendering quality is an interesting task and has a lot of applications such as rendering on VR devices. * The paper proposed a heuristic metric to evaluate the importance of each Gaussian, to help identify the less important Gaussians, which are then pruned out to reduce the storage cost and accelerate the rendering speed. * The paper used knowledge distillation after reducing the dimension of the spherical harmonic features to recover the rendering quality. * The paper did detailed ablation studies on the three key steps and the sub-steps, plus different design choices and hyperparameters, which clearly shows the effectiveness of each component. Weaknesses: * The proposed vector quantization step lacks novelty; for example, both [34] and [36] used vector quantization to compress Gaussian attributes, with minor differences in the method. * In comparison to the prior 3DGS compression works, the improvement is limited. As shown in Table 1, the improvement on the Mip-NeRF 360 dataset is not clear compared to [34], and the results are worse than [34] on the Tank and Temple dataset in terms of both rendering quality and model size. Besides, why are the rendering speeds of [34] not available? 
* The method requires at least two finetuning steps (Gaussian co-adaptation and VQ finetuning) for the compression, beyond the pre-trained 3DGS model, and each finetuning step requires the same iterations as pre-training. Prior work for example [36] can compress 3DGS in an end-to-end manner, and achieve similar performance. How long does the method take to compress one 3DGS model, and in comparison with the 3DGS compression baselines? In conclusion, the paper is not significantly better than the 3DGS compression baselines. The proposed two novel compression methods such as the Gaussian pruning by a heuristic metric and the knowledge distillation do not show considerable improvement over the baselines in terms of rendering quality, rendering speed, and storage cost. Technical Quality: 2 Clarity: 2 Questions for Authors: Please address the issues I pointed out in the 'weakness' section. Furthermore, I would appreciate clarification on the following questions in the authors' rebuttal. * Can authors provide the rendering speeds of [34] which are missing in Table 1? * In Table 2 are the reported FPSs for training (finetuning) or testing? If for testing why the FPS changes after some steps that should only affect the training procedure, for example from step [4] to [5]. Are they just noises? * In the experiments do you use a universal pruning ratio in the Gaussian pruning step, or is it scene-specific? * (Minor) In Algorithm 1 line 12 why use G' rather than G as the teacher? As the rendering quality of G' is slightly worse than G. Is it because G' is faster to render? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors provide a discussion of limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Vector quantization step lacks novelty** LightGaussian is motivated to design a holistic pipeline that effectively reduces the redundancy in the optimized Gaussians (NxC) for both the primitive count (N) and feature dimension (C). A large number of points will additionally result in slow rendering speed. The heavy attribute size for each primitive is mainly caused by the high-degree Spherical Harmonics used for preserving the view-dependent effect. Thus, LightGaussian proposes to identify the Gaussians that contribute the least to the training observations (images) to prune them away. This decreases the model size from 353MB to 116MB while increasing FPS from 192 to 303 (Tab.4). Further redundancy in the feature dimension is addressed through distillation to reduce the degree of Spherical Harmonics with pseudo-view augmentation to preserve the view-dependent effect. The model size is further reduced from 116MB to 77MB (Tab.4). Vector Quantization (VQ) is a general technique in image synthesis [1] and neural volumetric video representations [2]. We also utilize VQ with joint fine-tuning to further remove primitive redundancy. Overall, we achieve an average compression rate of over 15x while significantly boosting the FPS from 119 to 209, thanks to the holistic framework. We will moderate the tone of the contribution of Vector Quantization within our overall framework. **[W2] Improvement is limited? Improvement is not clear in Table 1. Why are the rendering speeds of [34] not available?** Please refer to the general response for the metrics under a fair experimental setting. Our method significantly boosts the FPS from 144 (3D-GS) to 237, a 64% improvement. We also surpass Compact 3DGS by 53% and Compressed 3DGS by 55% in FPS. **[W3] Generalization of LightGaussian? Time Comparison?** Please refer to the general response. 
**[W4] Improvement of Gaussian Pruning and SH Distillation?** We kindly argue that all three components contribute to both compactness and rendering efficiency. As shown in our ablation study (Table 2): pruning the least important Gaussians improves the FPS from 192 to 303 (+57%) while reducing the model size from 353MB to 116MB (-67%); SH compactness further reduces the model size to 77MB (-33%). We are hopeful that this new paradigm will disseminate valuable insights for efficient 3D learning. We respectfully reiterate other reviewers' comments regarding this point: "The idea of Gaussian Pruning & Recovery and SH distillation is novel, and achieves a better balance between the reconstruction quality, storage usage, and inference speed." **(Review #bxja)**; “demonstrates great performance on real world scenes.” **(Review #3NHK)**; “LightGaussian introduces a novel pruning strategy based on global significance” **(Review #TPCr)** **[Q1] Rendering speeds of [34] in Table 1?** Please refer to the general response. **[Q2] Why FPS changes from step [4] to [5] in Table 2** The evaluation is automatically performed after each compression step, rendering 1000 images and reporting the average FPS. Thus, slight discrepancies in each row are caused by the independent evaluation. For rows [7] and [8], where the 3D Gaussians are not further updated, we repeat the numbers from row [6]. **[Q3]: Universal pruning ratio, or scene-specific?** It is a universal ratio determined by experiments, as shown in Figure 5. **[Q4] Why use G' rather than G as the teacher?** They perform very similarly, with G’ being slightly faster. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed experiments and clarification, all my concerns are addressed, I will raise my score to 6. --- Reply to Comment 1.1.1: Title: Response from the Authors Comment: We are pleased that our responses have well addressed your concerns. 
We sincerely appreciate your willingness to raise your score from 4 to 6 and kindly note that it has not yet been updated in the system.
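The vector quantization step debated in [W1] above can be illustrated with a minimal sketch. This is not the paper's actual implementation (which uses sensitivity-aware clustering with joint fine-tuning); it is a plain k-means codebook over per-Gaussian SH coefficient vectors with purely illustrative shapes, showing how attribute storage drops from N×D floats to a small codebook plus one index per Gaussian:

```python
import numpy as np

def build_codebook(vectors, k, iters=20, seed=0):
    """Tiny k-means: returns (codebook [k, D], assignments [N])."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest code, then move codes to the
        # mean of their assigned vectors.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for c in range(k):
            members = vectors[assign == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    # Final assignment against the updated codebook.
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook, d.argmin(axis=1)

# Toy example: quantize 1000 synthetic "SH coefficient" vectors (45 floats
# each) down to an 8-bit codebook (256 entries) plus per-Gaussian indices.
rng = np.random.default_rng(1)
sh = rng.normal(size=(1000, 45)).astype(np.float32)
codebook, idx = build_codebook(sh, k=256)

# Storage: 1000*45 float32 values -> 256*45 float32 values + 1000 uint8 indices.
original_bytes = sh.nbytes
compressed_bytes = codebook.astype(np.float32).nbytes + idx.astype(np.uint8).nbytes
```

At these toy sizes the quantized representation is roughly 4x smaller; in practice the paper applies VQ only to less significant Gaussians and fine-tunes afterwards to recover quality.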
Summary: In this paper, the authors deliver a compact 3D Gaussian representation, i.e., LightGaussian, for novel view synthesis. There are three technical contributions. Firstly, the authors present Gaussian Pruning and Recovery, which measures the significance of each Gaussian to the view quality and then prunes 3D Gaussians that have minimal impact on the visual quality. Secondly, the authors propose Spherical Harmonics (SH) distillation that condenses the information from higher-order coefficients to a more compact form. Thirdly, a Vector Quantization step is presented to reduce the redundancy in spatial and lighting representations. Strengths: i) The technology in this paper is very comprehensive, including the three most mainstream methods of compressing deep learning models, i.e., pruning, distillation, and quantization. Thus, this paper has great engineering significance and can inspire subsequent works of compressing 3DGS from three points of view. ii) The idea of Gaussian Pruning & Recovery and SH distillation is novel. Computing the score for each 3D Gaussian to represent its significance is fancy and reasonable. The insight of designing Eq. (3) as the score function is interesting. As for the distillation, few people pay attention to the learnable numbers, especially the SH coefficients. Distilling the high-order coefficients with pseudo views also provides a new idea of how to reduce the attributes / learnable parameters of 3D Gaussians. iii) The proposed LightGaussian achieves a better balance between the reconstruction quality, storage usage, and inference speed. To be specific, in Table 1, LightGaussian yields improvements of 0.2 dB in PSNR, 81 FPS in inference speed, and 6 MB in storage memory over the state-of-the-art method Compact 3DGS on the Mip-NeRF 360 datasets. Meanwhile, LightGaussian achieves faster speed and a smaller model size on the Tank and Temples datasets with only a 0.2 dB drop. iv) The experiments are sufficient. 
The results in Table 2 clearly demonstrate the effect of each component. The experiments in Tables 3, 4, and 5 study the variants of the three compressing techniques. The authors consider very comprehensive situations in these ablation study experiments. v) The static webpage and video demo in the supplementary are very attractive and exquisite, saving a lot of time for readers to understand what the authors did in this work. Weaknesses: i) The story line is not coherent and the motivation may need more explanation. In particular, the three technical contributions, i.e., Gaussian Pruning and Recovery, SH Distillation, and Vector Quantization, seem like three independent works. Combining them without clear motivation makes this paper read like a technical report. Although I appreciate the work the authors have done, it would be better to improve the writing by establishing a coherent story line and adding clear motivation for the proposed methods. ii) It is better to add discussion and a fair comparison with previous 3DGS compression works such as Compressed 3D-GS [34] and Compact 3D-GS [36]. For example, what is the difference between the compression techniques? And what about the performance with the same baseline model? iii) Some technical details may require more explanation; for example, why is Eq. (3) designed this way as the contribution score of each 3D Gaussian? It would be interesting to know the process and key insight behind the design. How do you think about this question? iv) For the main results in Table 1, why does Compressed 3D-GS not have an inference speed? And why is LightGaussian better than the SOTA on the Mip-NeRF 360 dataset but worse on the Tank and Temples datasets? It is better to add analysis and discussion here to explain this performance gap. 
Technical Quality: 3 Clarity: 3 Questions for Authors: I have a question about the presented SH distillation step. Some existing works just use a view-independent component (i.e., low-order SH coefficients) to represent the color of 3D Gaussians [1]. Did you ever try this method? It would be interesting to discuss the SH distillation against this option. [1] Radiative Gaussian Splatting for Efficient X-ray Novel View Synthesis. In ECCV 2024. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Yes, the authors have analyzed the limitations and broader impact of the method in section 4.4 of the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Rephrase the motivation.** We are motivated by the observation that the efficient point-based representation, 3D-GS, and its many follow-ups perform poorly in model size because they have to store each of the N (usually millions) points. A large number of points typically results in slow rendering speed. Additionally, we found a heavy attribute size for each primitive, especially due to the high-degree Spherical Harmonics used for preserving the view-dependent effect. This motivates us to design a holistic pipeline that effectively reduces redundancy in the Gaussians (NxC) for both the primitive count (N) and feature dimension (C). We start by identifying the Gaussians that contribute the least to the training observations (images) and prune them away. Gaussian pruning not only reduces model redundancy but also significantly increases rendering efficiency, as there are fewer Gaussians in the viewing frustum. Further redundancy in the feature dimension is addressed through distillation to reduce the degree of Spherical Harmonics with pseudo-view augmentation to preserve the view-dependent effect. Vector Quantization (VQ) is a general technique that we utilize with joint fine-tuning to further remove primitive redundancy. We hope our pipeline is general and can be used as a plug-in tool for many researchers in this field. We will highlight the motivation in the revised introduction. **[W2] Discussion and fair comparison with Compressed 3DGS and Compact 3DGS.** **Compact 3DGS** proposes a learnable mask strategy to reduce the number of Gaussians and utilizes a grid-based neural field to replace the spherical harmonics for a compact representation. **Compressed 3DGS** proposes a post-processing method that uses sensitivity-aware vector clustering with quantization-aware training to compress the 3D Gaussians. 
In contrast, **LightGaussian** proposes a visibility-based Gaussian Significance Score to identify the least important Gaussians and introduce a distillation strategy with virtual view augmentation to transfer knowledge from teacher to student with low-degree Spherical Harmonics coefficients. Please refer to the general response for a fair comparison. **[W3] Explanation over Eq.3.** The motivation behind Eq. 3 is that the densification in 3DGS is based on spatial gradients and can be noisy, as it recovers the full 3D representation from limited training views. This results in an overparameterized scene representation to match the training pixels, which can affect the rendering speed if too many Gaussians are in the viewing frustum. Our insight is to find a general formulation to consider the contribution of each 3D Gaussian to the training views, determined by whether they intersect or not (denoted by $1(G(X), r)$), the opacity, and the volume of a Gaussian and how it impacts the training pixels. **[W4.1] Why FPS for Compressed 3D-GS is missing?** It is because we reiterated the metrics from their original paper, which does not provide FPS metrics. We reran the fair comparisons in the tables in General Response. **[W4.2] Discrepancy of the performance between MipNeRF360 and Tank and Temple?** In the main draft (Tab. 1), we use the re-trained 3D-GS, which performs slightly differently from the reported results in 3D-GS. We also reiterate the results from the compared methods in our main draft, as they do not disclose the training details. We conduct fair comparisons to align with the methods, mentioned in general response. It can be observed that LightGaussian has significantly higher FPS while preserving similar visual quantitative metrics compared to the other methods. **[Q1] Applying view-independent RGB similar to X-Gaussian?** Thanks for the suggestion! 
X-Gaussian redesigns a radiative Gaussian point cloud model inspired by the isotropic nature of X-ray imaging, incorporating Differentiable Radiative Rasterization and Angle-pose Cuboid Uniform Initialization for X-ray scanners. However, unlike X-ray imaging, simply utilizing view-independent methods significantly decreases the PSNR from 27.40dB to 24.89dB (MipNeRF360 datasets), which is significantly lower than our method (27.13dB). We will include this discussion in the revision. --- Rebuttal Comment 1.1: Title: Response to the author rebuttal Comment: Thanks for your reply and effort. All my concerns have been addressed. I keep my original score. --- Reply to Comment 1.1.1: Title: Response from the Authors Comment: Thank you for affirming your positive view of our paper.
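The intuition behind Eq. 3 explained in [W3] above, namely accumulating each Gaussian's opacity-times-normalized-volume contribution over the training rays that hit it and then pruning the lowest-scoring Gaussians, can be sketched as follows. All arrays are synthetic placeholders, and the paper's exact normalization and ray-intersection test differ; this is only a simplified stand-in:

```python
import numpy as np

def significance_scores(opacity, scales, hits):
    """
    opacity: [N] per-Gaussian opacity
    scales:  [N, 3] per-axis scales (volume ~ product of the three scales)
    hits:    [N] number of training rays each Gaussian intersects
    """
    volume = scales.prod(axis=1)
    # Crude normalized-volume term; the paper's gamma(Sigma) is more refined.
    gamma = volume / volume.max()
    return hits * opacity * gamma

def prune(scores, ratio):
    """Keep the (1 - ratio) fraction of Gaussians with the highest scores."""
    keep = max(1, int(len(scores) * (1.0 - ratio)))
    return np.argsort(scores)[::-1][:keep]

rng = np.random.default_rng(0)
N = 10_000
scores = significance_scores(
    opacity=rng.uniform(0, 1, N),
    scales=rng.uniform(0.01, 0.1, (N, 3)),
    hits=rng.integers(0, 500, N).astype(float),
)
kept = prune(scores, ratio=0.5)  # e.g. prune half of the Gaussians
```

The pruned model is then fine-tuned ("recovery") on the training views, which is what lets aggressive global ratios (Figure 5 in the paper) preserve quality.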
Summary: This manuscript presents a pipeline to drastically reduce the size of pretrained 3D Gaussian splatting models in a way that preserves novel view image fidelity and increases rendering speed. This pipeline consists of three parts: 1) pruning based on an introduced global significance score followed by fine-tuning, 2) reducing the spherical harmonic dimension of the Gaussians and using knowledge distillation to recover the lost specular information, 3) vector quantization of the spherical harmonics of Gaussians below a certain global significance score threshold. The global significance score per Gaussian is accumulated over all pixel rays that intersect that Gaussian and is the sum of the opacity of the Gaussian multiplied by a normalized volume of the Gaussian. Experimental results show a marked decrease in the model size and increase in rendering speeds while image quality is only slightly degraded. Strengths: S1: This manuscript is well written and easy to follow. Details are provided that should allow for the ability to replicate experiments. S2: The current size of 3D-GS models makes them unusable in low resource settings — such as VR/AR — and compressing these models will open up many new advancements in downstream applications. The proposed pipeline demonstrates great performance on real world scenes and will be a benefit to researchers working in resource constrained settings. Weaknesses: **Main weaknesses:** W1: I’m surprised that contribution per pixel ray is sufficient for an effective global significance score. It seems like a per-Gaussian score would need to account for both the Gaussian’s contribution to the input image pixels AND the location of those pixels in 3D space. A Gaussian observed by multiple cameras should hold more importance as it will be more likely to faithfully reconstruct the 3D scene than a Gaussian only observed by a single camera. 
W2: I’m also surprised that the global significance score doesn’t need to account for the exponential drop in contribution to the pixel color due to alpha blending from Gaussians in front of the Gaussian whose score is being computed. W3: Does the global significance score get updated for all Gaussians along a pixel’s ray or is it updated for only those Gaussians that contribute to the pixel color? I suspect it is the latter, but this should be clearly stated in the manuscript. W4: It’s surprising to me that recovering the higher degree spherical harmonic representation in the lower spherical harmonic degrees via knowledge distillation and sampled views is needed to maintain high image quality after spherical harmonic reduction. I’d expect that recovering the color via fine-tuning in just the training views should work quite well, and knowledge distillation and sampled views should be ablated against this. **Cosmetic issues:** Eq. 4: Volume should always be positive, so there shouldn’t be a need to take the max of normalized volume and 0. Line 200: “network” should be “model”; 3D-GS models aren’t networks. Technical Quality: 3 Clarity: 3 Questions for Authors: I’m rating this manuscript as “Accept”. I encourage the authors to engage with the weaknesses I’ve listed and will consider raising my score if they are addressed in the rebuttal. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This manuscript has adequately covered the limitations of the proposed pipeline. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1 & W2] Does the score need to account for the exponential drop in contribution to the pixel color? Is the score for each Gaussian computed using multiple cameras or a single camera?** We thank you for the insightful suggestions. When incorporating the “exponential drop” into Eq. 3, we found the Gaussian pruning metrics slightly improve on both the indoor scene and the outdoor unbounded scene. In the tables below, we replace the Gaussian opacity $\sigma$ with $\sigma \cdot T$, $\alpha$, and $\alpha \cdot T$, where $T$ is transmittance and $\alpha$ is parameterized by opacity, scale, and covariance. We found that using opacity performs similarly to alpha, while incorporating transmittance slightly improves the pruning effectiveness in both cases. We will include a more detailed discussion in the revision. Eq. 3 considers the impact from all training cameras (denoted as M) to formulate the score for each Gaussian.

| Scene Bicycle | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|--------------------------|------------|-----------|------------|
| opacity | 25.09 | 0.7593 | 0.2261 |
| opacity*transmittance | 25.17 | 0.7647 | 0.2196 |
| alpha | 25.08 | 0.7591 | 0.2256 |
| alpha*transmittance | 25.15 | 0.7630 | 0.2206 |

| Scene Room | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|--------------------------|------------|-----------|------------|
| opacity | 31.38 | 0.9144 | 0.2364 |
| opacity*transmittance | 31.39 | 0.9158 | 0.2320 |
| alpha | 31.35 | 0.9135 | 0.2374 |
| alpha*transmittance | 31.38 | 0.9158 | 0.2317 |

**[W3] Does the score get updated for all Gaussians along a pixel’s ray or for only those Gaussians that contribute to the pixel color?** You are right, it is the latter, and we will clarify this for the criterion $1(G(X_j), r_i)$ of Eq. 3 in Sec. 3.2. The intersection test stops when the accumulated opacity reaches 0.9999, the same as in 3D-GS. 
**[W4] Why is distillation needed in converting high-order SH to low-order SH?** To validate the effectiveness of distillation with augmentation, we ablate the distillation design choices in rows [5] and [6] of Table 2. Additionally, we conducted another experiment on Scene Room (MipNeRF360) that only utilizes photometric loss from the training views to recover the model with low-degree Spherical Harmonics coefficients. The results indicate that simply fine-tuning with photometric error can recover most of the accuracy, while the use of teacher-student distillation with data augmentation can further slightly improve the rendering quality. The table below is based on the new configuration mentioned in the general response.

| Methods/Metrics | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|--------------------------------------|--------|--------|---------|
| Photometric Loss | 31.42 | 0.9157 | 0.2338 |
| Distillation + Pseudo-views | 31.48 | 0.9167 | 0.2313 |

In the general machine learning domain, we also noticed literature [1] that studies the use of teacher-student distillation to recover full accuracy for the student. By utilizing "mixup" data augmentation to construct virtual training examples, the distillation process can "generate support points outside the original image manifold," which is beneficial for accuracy recovery. **[W5] Cosmetic issues?** We will revise the draft accordingly. **Reference** [1]. Knowledge distillation: A good teacher is patient and consistent, CVPR 2022 --- Rebuttal Comment 1.1: Comment: W2: I’m glad to see that transmittance helped improve the metrics! W4: If you have not already done so, I’d recommend adding this ablation in the appendix with appropriate links in the main paper. Overall, I’m satisfied and will keep my rating the same. --- Reply to Comment 1.1.1: Title: Response from the Authors Comment: Thank you for affirming your positive view of our paper.
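The “exponential drop” discussed in [W1 & W2] comes from front-to-back alpha compositing: each Gaussian's blend weight along a ray is its alpha times the transmittance accumulated from the Gaussians in front of it. A minimal sketch, including the 0.9999 early-termination rule the [W3] answer mentions:

```python
import numpy as np

def blend_weights(alphas, stop=0.9999):
    """
    Front-to-back alpha compositing along one ray.
    Returns per-Gaussian blend weights alpha_i * T_i, where
    T_i = prod_{j<i} (1 - alpha_j). Stops once accumulated opacity
    exceeds `stop`, the 3D-GS termination criterion.
    """
    weights = np.zeros_like(alphas, dtype=float)
    T = 1.0  # transmittance seen so far
    for i, a in enumerate(alphas):
        weights[i] = a * T
        T *= (1.0 - a)
        if 1.0 - T > stop:
            break
    return weights

# Four Gaussians of alpha 0.5 stacked along one ray:
w = blend_weights(np.array([0.5, 0.5, 0.5, 0.5]))
# -> [0.5, 0.25, 0.125, 0.0625]
```

Later Gaussians contribute exponentially less to the pixel color, which is why weighting the significance score by transmittance (the $\sigma \cdot T$ variant in the ablation above) is a natural refinement.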
Rebuttal 1: Rebuttal: **General Response: Rendering speed of Compressed 3D-GS is missing, and a fair comparison?** The reason for omitting the FPS of Compressed 3D-GS [34] in the main draft is because we reiterated the metrics from their original paper, which does not provide FPS metrics. We also respectfully point out that the original 3D-GS utilizes different downsampling ratios for different scenes (see full_eval.py in their GitHub repository) while the compared methods do not clarify their settings. Thus, for a fair comparison, we reran experiments for LightGaussian, Compact 3DGS [36], and Compressed 3DGS [34] on the same platform (NVIDIA A6000) using the default 3D-GS resolution configuration. We observed that our method outperforms Compact 3DGS and Compressed 3DGS in 4 out of all 5 metrics on the MipNeRF360 dataset while also running fastest on Tank and Temples datasets. Notably, our method significantly boosts the FPS from 144 (3D-GS) to 237, a 64% improvement. We also surpass Compact 3DGS by 53% and Compressed 3DGS by 55% in FPS. We provide visual comparisons among the adopted methods in the attached PDF. 
| Data: MipNeRF360 | FPS ↑ | Size ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|--------------------------|-----------|----------|------------|-----------|------------|
| 3D-GS (baseline) | 144 | 782.11 | 27.40 | 0.8192 | 0.217 |
| Compact 3D-GS | 154 | 49.40 | 27.01 | 0.7986 | 0.243 |
| Compressed 3D-GS | 152 | 28.63 | 27.03 | 0.8018 | 0.238 |
| LightGaussian | 237 | 45.21 | 27.13 | 0.8066 | 0.237 |

| Data: Tank and Temples | FPS ↑ | Size ↓ | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|---------------------------|----------|----------|------------|-----------|--------------|
| 3D-GS (baseline) | 182 | 431.0 | 23.66 | 0.8445 | 0.178 |
| Compact 3D-GS | 238 | 39.43 | 23.29 | 0.8285 | 0.202 |
| Compressed 3D-GS | 202 | 17.68 | 23.54 | 0.8380 | 0.189 |
| LightGaussian | 357 | 25.30 | 23.44 | 0.8318 | 0.202 |

**General Response: Training vs Rendering Speed?** We report the training and inference efficiency based on the fair experimental setting on MipNeRF360 datasets.

| Method | Rendering (FPS) | Train Time (minutes) |
|-----------------------------|-----------------|-----------------------|
| 3D-GS | 144 | 23.8 mins |
| Compact 3D-GS | 154 | 33.7 mins |
| 3D-GS + Compressed 3D-GS | 152 | 23.8 + 3.8 mins |
| 3D-GS + LightGaussian | 237 | 23.8 + 9.0 mins |

**General Response: Generalization to different methods?** LightGaussian can function as a plugin and is more general for various point-based representations. Specifically, we also validate LightGaussian on Scaffold-GS [1] to prune redundant neural Gaussians using our Visibility-aware Gaussian Pruning & Recovery. We observe that by pruning 80% of neural Gaussians, we can accelerate the rendering speed of Scaffold-GS from 152 to 173 FPS. Results are averaged on MipNeRF360 datasets. We reran the experiments on our platform for fair comparisons. 
| Methods/Metrics | FPS | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|--------------------------------------|-----|--------|--------|---------|
| 3D-GS | 144 | 27.40 | 0.8192 | 0.217 |
| Scaffold-GS | 152 | 27.96 | 0.8240 | 0.2075 |
| HAC | 167 | 27.76 | 0.8191 | 0.2198 |
| Scaffold-GS + LightGaussian | 173 | 27.78 | 0.8187 | 0.2197 |

**Reference** [1] Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering, CVPR 2024. Pdf: /pdf/4982d0b3958316bc0ebdc1e2c2fa290a45733374.pdf
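As a quick sanity check, the FPS improvement percentages quoted in the general response follow directly from the MipNeRF360 FPS numbers reported above:

```python
# FPS values from the general response's MipNeRF360 table.
fps = {
    "3D-GS": 144,
    "Compact 3D-GS": 154,
    "Compressed 3D-GS": 152,
    "LightGaussian": 237,
}

def speedup_pct(ours: float, base: float) -> float:
    """Relative FPS improvement of `ours` over `base`, in percent."""
    return (ours - base) / base * 100.0

gain_vs_3dgs = speedup_pct(fps["LightGaussian"], fps["3D-GS"])              # ~64.6%
gain_vs_compact = speedup_pct(fps["LightGaussian"], fps["Compact 3D-GS"])   # ~53.9%
gain_vs_compressed = speedup_pct(fps["LightGaussian"], fps["Compressed 3D-GS"])  # ~55.9%
```

These match the rounded "64%", "53%", and "55%" figures stated in the rebuttals.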
NeurIPS_2024_submissions_huggingface
2024
MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures
Accept (poster)
Summary: The authors propose MixEval to match real-world human queries with existing benchmarks. MixEval is a two-stage benchmark reconstruction pipeline consisting of (1) wild query detection, and (2) grounding existing benchmarks in the mined queries. The authors match each crawled web user query with its most similar query in the benchmark pool and the corresponding ground truth answer to align benchmark queries with real-world queries. In order to improve the benchmark’s ability to distinguish strong models, the authors derive a challenging subset from MixEval, which is called MixEval-Hard. Strengths: 1. The authors match each crawled web user query with its most similar query in the benchmark pool and the corresponding ground truth answer to align with human preferences. 2. The experimental results demonstrate that MixEval and MixEval-Hard are highly aligned with Chatbot Arena, outperforming individual benchmarks. Weaknesses: 1. In Section 3.2, the authors use dot product to match the query with the original benchmark, but do not explain how to use new queries in the test process and whether the queries are rewritten adaptively to match the ground truth answers. Why can the answers from the original benchmark be considered the answers for the new queries? 2. In Figure 5, the description of the experimental setup is missing. It is not clear whether 0-shot or 5-shot is used in Figure 5, or how the inputs of "Mixed" and "Original" are formatted for the models. 3. It is better to demonstrate that the "User query" and "Ground truth-benchmark" processes are effective for constructing benchmarks through ablation studies. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why can queries from different batches of the same distribution mitigate the contamination problem, instead of sampling queries from different distributions? 2. Why is the performance of the Benchmark-level Mix in Figure 5 lower than the average performance of the Mixed benchmarks? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed detailed limitations in the Appendix A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
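The matching step the summary describes, pairing each crawled web query with its most similar benchmark-pool query via embedding similarity, can be sketched as below. The embeddings and sizes are synthetic placeholders; per the authors' clarification elsewhere in this thread, the web query is discarded after matching and only the matched benchmark question and ground-truth answer are kept:

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def match_to_pool(web_emb, pool_emb):
    """For each web-query embedding [W, D], return the index of the most
    similar benchmark-pool query [P, D] by dot product of unit vectors.
    The web query itself is then dropped; only the matched benchmark
    sample (question + ground-truth answer) enters the benchmark."""
    sims = normalize(web_emb) @ normalize(pool_emb).T  # [W, P]
    return sims.argmax(axis=1)

rng = np.random.default_rng(0)
pool = rng.normal(size=(200, 64))   # embeddings of benchmark-pool queries
targets = np.array([3, 42, 7])
# Web queries simulated as scaled, lightly perturbed copies of known pool
# entries, so the intended nearest neighbours are unambiguous.
web = pool[targets] * 2.0 + rng.normal(scale=0.01, size=(3, 64))
matched = match_to_pool(web, pool)  # -> array([3, 42, 7])
```

Because similarity is computed on normalized embeddings, the match is invariant to query length/scale; a real pipeline would use a trained text encoder rather than random vectors.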
Rebuttal 1: Rebuttal: We thank the reviewer for finding this work solid and effective! Below are our responses to the concerns: ## Concern 1 > In Section 3.2, the authors use dot product to match the query with the original benchmark, but do not explain how to use new queries in the test process and whether the queries are rewritten adaptively to match the ground truth answers. Why the answers from the original benchmark can be considered as the answers for the new queries? We thank the reviewer for the insightful comments! To clarify, after matching the web query with the original benchmark, **the original query will be dropped and we only use the questions and answers of the matched benchmark samples**. The benchmark mixture aims to map the real-world task distributions to the ground-truth-based benchmarks. We specified this in line 47-48, Figure 4 (graphic illustration), and Section 3.2 (notation illustration). We will carefully improve the related specifications in the later versions. ## Concern 2 > In Figure 5, the description of the experimental setup is missing. It is not clear 0-shot or 5-shot is used in Figure 5, and how the inputs of "Mixed" and "Original" are formatted for the models. We thank the reviewer for pointing that out! **The setting for Figure 5 is the same as other experiments**: we use 0-shot and official, unified model input formatting; the correlation numbers of the "original" benchmarks are the same as Figure 1 and 10, whose settings are specified in Section E. We will make it clearer in the next version. ## Concern 3 > It is better to demonstrate the "User query" and "Ground truth-benchmark" processes are effective for constructing benchmarks through the ablation studies. **We wish to highlight that we did the ablation study in Section 4.2–"MixEval outperforms both benchmark-level and uniform mixtures."** This ablation study aims to illustrate the effectiveness of the core step: benchmark mixture of MixEval. 
We compare the MixEval benchmark mixture with two other mixture schemes: Benchmark-level Mixture (ablating the sample-level mixture of MixEval) and Uniform Mixture (ablating both the sample-level and benchmark-level mixture of MixEval). Benchmark-level Mixture samples data points uniformly from each benchmark of the benchmark pool, with the benchmark size proportional to its split size in MixEval. In other words, its benchmark size distribution is the same as MixEval, while it's uniform inside each benchmark. The Uniform Mixture simply uniformly samples an equal number of data points from all benchmarks of the benchmark pool. Both methods yield significantly lower Arena Elo correlations than the MixEval mixture, illustrating the effectiveness of our method. **Beyond that, we report the quantitative results of the whole query detection pipeline below (on our devised benchmarks) to illustrate the importance of the looped training.** Before being trained, a language model achieving high recall was chosen (Vicuna-33B), with 99.12% recall and 46.21% precision; the looped training (as described in Section 3.1) significantly improves the precision while maintaining the recall, illustrating the accuracy of the devised web query detection pipeline and the importance of the looped training.

Table 2: The breakdown metrics of the web query detection pipeline.

| Model | Param | Pipeline Recall | Pipeline Precision | Pipeline F1 |
|-----------------|-----------------|--------------|-----------|-----------|
| Web Detector (initial) | 33B | 99.12 | 46.21 | 63.03 |
| Web Detector (trained) | 33B | 99.55 | 98.61 | 99.07 |

Besides the above two ablations, the whole Section 4.2 has demonstrated the effectiveness of MixEval based on experimental results from multiple perspectives, which we believe has adequately illustrated the effectiveness. 
## Concern 4

> Why can queries from different batches of the same distribution mitigate the contamination problem, instead of sampling queries from different distributions?

We thank the reviewer for this insightful comment! The main reason is that there is a significant difference between every two batches, as illustrated in Table 2 of the submitted paper. Note that beyond **batch web query update**, we will also perform **source web query update**, which updates all the web user queries with the latest Common Crawl splits and can be interpreted as updating queries from different distributions. **We analyzed the dynamism and contamination of MixEval in detail in the general response to all reviewers and AC (at the top of this page). Hope that will help resolve this concern!**

## Concern 5

> Why the performance of the Benchmark-level Mix in Figure 5 is lower than the average performance of the Mixed benchmarks?

Because the **benchmark-level mixture (Benchmark-level Mix)** is expected to be worse than the **sample-level mixture (Mixed)**. As we mentioned earlier, "Benchmark-level Mix" is part of the ablation study; it samples data points uniformly from each benchmark of the benchmark pool, with each benchmark's size proportional to its split size in MixEval. In other words, its benchmark size distribution is the same as MixEval's, while sampling is uniform inside each benchmark. Such a mixture scheme is therefore expected to perform worse than the "Mixed" benchmarks in Figure 5, which, as specified in lines 238-239, perform the same sample-level mixture as MixEval. This performance gap arises from the fact that MixEval reconstructs real-world use cases more effectively than "Benchmark-level Mix".

---

Rebuttal Comment 1.1: Title: Thank you for your response! Comment: Thanks for the response. Looking forward to your final version.
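Returning to Concern 1 of this thread: the dot-product matching of a web query embedding against benchmark-sample embeddings can be sketched as below. This is a toy illustration with made-up 3-dimensional "embeddings" and hypothetical helper names (`dot`, `match`); it is not the authors' pipeline, which uses a sentence-transformer model:

```python
def dot(u, v):
    # Inner product of two equal-length embedding vectors.
    return sum(a * b for a, b in zip(u, v))

def match(query_emb, pool_embs):
    # Index of the benchmark sample whose embedding scores highest
    # against the query embedding (greedy nearest-neighbor retrieval).
    return max(range(len(pool_embs)), key=lambda i: dot(query_emb, pool_embs[i]))

# Toy 3-dim "embeddings": the web query is closest to benchmark sample 2,
# so that sample's question/answer pair would stand in for the web query.
pool = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.6, 0.6, 0.5)]
query = (0.5, 0.5, 0.7)
print(match(query, pool))  # -> 2
```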
Summary: The paper reconstructs a new benchmark named MixEval by matching queries collected from the internet with existing benchmarks. This new benchmark aligns with the distribution of human preferences, reflecting the real distribution of queries on the internet. Additionally, considering the overlap and difficulty among different benchmarks, the paper introduces MixEval-Hard by recalculating model scores across various benchmarks. Both MixEval and MixEval-Hard can be updated quickly. Finally, the paper demonstrates that MixEval aligns better with Arena Elo compared to other benchmarks, and the mixing operation enhances the alignment of other benchmarks with Arena Elo.

Strengths:
1. The experiments are comprehensive, providing substantial results that validate the proposed method's ability to align well with the Arena Elo benchmark.
2. By utilizing queries from the internet to shift the distribution of existing benchmarks to reflect real human preferences, the approach helps prevent large models from overfitting on existing benchmarks. Additionally, the paper offers insights into the consistency between benchmarks, indicating that alignment results are influenced not only by query distribution but also by question difficulty and density.
3. An interesting visualization method is proposed, effectively reflecting the main aspects evaluated by the existing benchmarks.

Weaknesses:
1. Given that alignment with Arena Elo is used to measure the degree of alignment with human preferences, why not use data directly from Arena Elo when constructing MixEval, or even create a subset sampled from Arena Elo (considering Arena Elo directly reflects human preferences)?
2. To avoid models overfitting on existing benchmarks, the paper updates the benchmarks by drawing new queries from the internet. However, I believe overfitting occurs due to models overfitting on a fixed benchmark. Without changing the benchmark, the overfitting issue will not be resolved.
Therefore, a method to determine if a benchmark is overfitted should be designed to add LLM benchmarks to the pool. Additionally, Table 2 does not demonstrate that this method alleviates overfitting.
3. Queries drawn from the internet might have already been seen by LLMs, so the retrieved samples from existing benchmarks might be simpler. The authors could exclude such factors during query extraction.
4. Some symbols are not explained, such as "j" in line 187, "tau" in the formula between lines 187 and 188, and "lambda" in line 189.

Technical Quality: 3
Clarity: 2
Questions for Authors: Please see above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the comprehensive experiments, analysis, and insights of this work. Meanwhile, we understand the reviewer's concerns, which are also very important to us. Below we clarify:

## Concern 1

> Given that alignment with Arena Elo is used to measure the degree of alignment with human preferences, why not use data directly from Arena Elo when constructing MixEval, or even create a subset sampled from Arena Elo (considering Arena Elo directly reflects human preferences)?

We thank the reviewer for raising this concern. We think there may exist some misunderstandings here. As a warm reminder, we illustrated the reasons for using web queries in Section A.2 (FAQs). Below we reorganize Section A.2 as a response to this concern:

First of all, it's worth noting that we are only taking Chatbot Arena as a measure of MixEval's capable-model ranking, **while our approach and data have nothing to do with Chatbot Arena**–our real-world user queries are solely detected from the web and the benchmark pool consists of the existing ground-truth-based benchmarks. More importantly, **our goal isn't to fit Chatbot Arena's distribution; we are trying to fit real-world task distributions instead**. As shown in Section 2.2, line 127, **Chatbot Arena queries are slightly biased towards technical users**. However, web queries are grounded in the largest human population (5.4 billion internet users) among the accessible query sources and are thus more representative of real-world users (illustrated in Section 2.2). Moreover, **the web queries are multi-modal**: the queries on the web do not only comprise text-to-text user queries; they also feature other modalities, such as image-to-text, text-to-audio, etc. Such queries are not available on Chatbot Arena or other chat platforms if they do not support models with the corresponding inputs/outputs.
Our detected queries of other modalities provide a proxy for building an upgraded MixEval with any-to-any modalities, which facilitates a better evaluation suite for the whole AI community. However, without web queries, it's impractical to get well-distributed any-to-any real-world queries within the community. We will release the multi-modal version soon.

## Concern 2

> To avoid models overfitting on existing benchmarks, the paper updates the benchmarks by drawing new queries from the internet. However, I believe overfitting occurs due to models overfitting on a fixed benchmark. Without changing the benchmark, the overfitting issue will not be resolved. Therefore, a method to determine if a benchmark is overfitted should be designed to add LLM benchmarks to the pool. Additionally, Table 2 does not demonstrate that this method alleviates overfitting.

We thank the reviewer for this insightful comment! **We analyzed the dynamism and contamination of MixEval in detail in the general response to all reviewers and AC (at the top of this page). Hope that will resolve this concern!** As mentioned, although the contamination ratio of MixEval is comparatively low, we will indeed perform benchmark contamination detection before adding a benchmark to the benchmark pool in future versions of MixEval, to further reduce contamination. In addition, we will perform benchmark pool updates to incorporate newly released ground-truth-based benchmarks into the benchmark pool, e.g., replacing MMLU with MMLU-Pro, to mitigate contamination.

## Concern 3

> Queries drawn from the internet might have already been seen by LLMs, so the retrieved samples from existing benchmarks might be simpler. The authors could exclude such factors during query extraction.

Thanks for pointing that out! We will carefully add a pre-processing step accordingly in future MixEval releases. At the same time, we illustrate below that the described situation has a **negligible impact** on model evaluation.
This concern can be converted to "We should exclude the benchmark queries leaked to the web from the extracted web queries to avoid introducing bias in the later benchmark mixture stage" (this conversion requires some reasoning; due to the character limit, we do not expand on it here and would be happy to discuss it later). **We argue that it will have a negligible impact on model evaluation.** Below we give a rough estimate of the ratio of leaked benchmark queries in our extracted web queries. The benchmark pool has a size of 10^5-10^6. Suppose the web contains 10^12-10^13 sentences [1] with a query ratio of 6%, i.e., among 100 web sentences there are 6 valid web queries; then the web query quantity would be around 10^11-10^12. Thus, the ratio of benchmark entries among the web queries will only be around 10^-6. **Containing such a low proportion of leaked benchmark queries in our web queries will introduce negligible bias to the later benchmark mixture stage, because the amount of web queries we finally sample for benchmark mixture is only around 10^3-10^4, meaning it's highly likely that there won't be any leaked benchmark query in the final sampled web queries.**

Besides, the earlier illustration of contamination and dynamism (Table 1 of this rebuttal, in the general response) shows that MixEval is relatively less contaminated (~10%) than other popular benchmarks, which can also be interpreted as "MixEval samples are less likely to have already been seen by LLMs".

[1] Xue, Fuzhao, et al. "To repeat or not to repeat: Insights from scaling LLM under token-crisis." Advances in Neural Information Processing Systems 36 (2024).

## Concern 4

> Some symbols are not explained, such as "j" in line 187, "tau" in the formula between lines 187 and 188, and "lambda" in line 189.

Thanks for pointing that out! These symbols are indeed under-specified, and we will carefully fix this in the next version.
Here, j denotes the model's index; τ denotes the rejection threshold; λ denotes the scaling hyperparameter.

---

Rebuttal Comment 1.1: Title: Thanks for the response! Comment: Thank you for your response. The user's query is multimodal, so it is reasonable to use a mixed benchmark. My concern regarding the leakage issue has been partially resolved. Additionally, concerning Concern 3, although the queries in the benchmark are relatively rare on the entire web, it's possible that all queries on the web have been used as training data for the query-matching model. Therefore, I wonder if using these queries to match those in the benchmark might be more likely to match the polluted queries rather than queries that the model has never seen. However, based on Table 1 in the final rebuttal response, it seems that the query matching does not introduce this bias. Thank you again for your response, and I will raise my score.

---

Reply to Comment 1.1.1: Title: Thank you for raising score! Comment: Thank you for your insightful comments and appreciation. Your support is very important to MixEval!
Summary: The paper introduces MixEval, a new benchmarking framework designed to overcome the limitations of traditional benchmarks and LLM-as-judge methods for evaluating LLMs. By leveraging web-mined user queries and matching them with existing benchmark queries, MixEval aims to offer a fast, efficient, and dynamic evaluation method that aligns closely with real-world human preferences. This framework promises significant cost and time savings and shows a high correlation with human preference leaderboards like Chatbot Arena.

Strengths:
1. This paper tackles the important evaluation problem in a timely manner. MixEval introduces a fresh approach to bridging the gap between real-world user queries and efficient evaluation. By using web-mined queries, it aims to better reflect actual user interactions.
2. The framework is efficient, supports dynamic updates, and has a high correlation with human preferences.
3. The paper goes above and beyond by providing extensive analysis and comparison with other popular LLM benchmarks, giving valuable insights into the strengths and weaknesses of different evaluation methods.
4. The paper is nicely written, with figures and tables well-organized. In particular, Figure 2 stands out for its clarity and effectiveness in presenting complex information.

Weaknesses:
1. Pipeline Brittleness:
   a. Web User Query Detection. The web user query detection phase has an overabundance of negative examples, which does little to help distinguish positive examples.
   b. Benchmark Mixture. Additionally, the sentence transformer used in the benchmark mixture phase has limited performance. Have you tried other sentence embedding models? Also, it's unclear how many ground-truth LLM benchmarks can be accurately matched to mined queries.
   c. The error rates for each module and the accumulated error rate are not provided.
2.
Single-Language Focus: The all-mpnet-base-v2 model used in the framework is designed for English only, raising concerns about its adaptability to different linguistic and cultural contexts.

Technical Quality: 3
Clarity: 3
Questions for Authors:
1. Could you provide a detailed qualitative and quantitative analysis of the Web User Query Detection and Benchmark Mixture modules?
2. It seems that the queries of MixEval-Hard are longer than those of MixEval; is there any distribution difference between them?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing MixEval as a timely work for the community! We are also grateful for your acknowledgment of the novelty, efficiency, thoroughness, and clarity of our work. Below are our responses to the concerns:

## Concerns 1 & 3

> Pipeline Brittleness: a. Web User Query Detection. The web user query detection phase has an overabundance of negative examples, which does little to help distinguish positive examples. b. Benchmark Mixture. Additionally, the sentence transformer used in the benchmark mixture phase has limited performance. Have you tried other sentence embedding models? Also it's unclear how many ground-truth LLM benchmarks can be accurately matched to mined queries. c. The error rates for each module and the accumulated error rate are not provided. Could you provide detailed qualitative and quantitive analysis of module Web User Query Detection and Benchmark Mixture?

**Overabundance of negative examples.** We maintained a balanced set of positive and negative samples for training by down-sampling the negative samples. The original specification may be a bit ambiguous; we will carefully fix this in the revised version.

**Weak retriever.** It's worth noting that in our retrieval task, both the query and the key are short sentences, as the queries are real-world user prompts and the keys are benchmark questions; therefore the task does not require state-of-the-art embedding models. We tested with both all-mpnet-base-v2 and OpenAI's text-embedding-ada-002, and they showed negligible differences in our task (only 6 cases were significantly different over 500 retrievals). Besides, as shown in Figure 2, MixEval data points exhibit a perfect mapping from the original web queries (the lowest C-Dist), further illustrating its effectiveness.

**Quantitative analysis of web query detection.** We report the quantitative results of the whole detection pipeline below (on our devised benchmarks).
Before being trained, a language model achieving high recall was chosen (Vicuna-33B), with 99.12% recall and 46.21% precision. The looped training (as described in Section 3.1) significantly improves the precision while maintaining the recall, illustrating the low error accumulation rate of the devised web query detection pipeline.

Table 2: The breakdown metrics of the web query detection pipeline.

| Model | Param | Pipeline Recall | Pipeline Precision | Pipeline F1 |
|-----------------|-----------------|--------------|-----------|-----------|
| Web Detector (initial) | 33B | 99.12 | 46.21 | 63.03 |
| Web Detector (trained) | 33B | 99.55 | 98.61 | 99.07 |

## Concern 2

> Single-Language Focus: The all-mpnet-base-v2 model used in the framework is designed for English only, raising concerns about its adaptability to different linguistic and cultural contexts.

We thank the reviewer for pointing this out! This is indeed a good point. Though MixEval and MixEval-Hard are English-dominant (>95% English), keeping the whole pipeline multi-lingual is beneficial. **Note that MixEval is dynamic and can be updated over time. We will replace the current retriever with a capable multi-lingual one in future MixEval releases to further improve the distribution.**

## Concern 4

> It seems that the query of MixEval-Hard is longer than MixEval, is there any distribution difference between them?

Yes, they have some distribution differences, as they correspond to different difficulty levels. However, we have successfully controlled the distribution shift of MixEval-Hard with the rejection sampling mechanism introduced in Section 3.3. As shown in Figure 2, MixEval-Hard achieves a low C-Dist with the original web queries. The average length difference might arise from the fact that harder tasks tend to have longer prompts on average, which aligns with common sense.
Summary: This paper introduces MixEval, a new approach/benchmark to evaluate LLMs effectively in real-world scenarios. Traditional benchmarks often miss the comprehensiveness and subtlety of actual user queries, while existing methods like LLM-as-judge benchmarks are difficult to scale up. MixEval addresses these issues by using user queries mined from the web and aligning them with similar queries from established benchmarks. The key advantage of MixEval is its efficiency and dynamic nature. It achieves a high correlation with Chatbot Arena (0.96 Spearman correlation) but requires only 6% of the time and cost compared to mainstream benchmarks like MMLU. MixEval can also be updated quickly, significantly reducing the risk of benchmark contamination. The paper also introduces MixEval-Hard for better differentiation among strong models. Through comprehensive analysis, the authors demonstrate that MixEval offers accurate, less biased, and more dynamic evaluations, providing a potential scalable solution for real-world LLM assessment.

Strengths:
1. Aligning web queries with mainstream benchmarks to simulate real-world user preferences is a novel approach. The authors cleverly transform the challenging task of evaluating open-ended queries into a benchmark mixture with groundtruth-based results, which is an interesting idea.
2. MixEval is an effective alternative to ChatBot Arena as it can scale up, dynamically update, and has lower costs.
3. The authors provide a comprehensive analysis of various LLMs on MixEval, demonstrating high correlations with ChatBot Arena.

Weaknesses:
**Major Issues**:
1. MixEval dynamically updates by mixing popular benchmarks (e.g., MMLU, BoolQ, GSM8K), which may not mitigate contamination. Most of these benchmarks are saturated and suffer from contamination.
Although the authors claim they will "dynamically expand our benchmark pool with newly released benchmarks to further enhance the mixed benchmark distribution," this does not address the root issue of contamination.

**Minor Issues**:
1. While the topic distributions in Figure 2 are impressive, most of these benchmarks were not designed to simulate human preferences, hence the skewed topic distributions may be intentional for evaluating specific tasks, domains, or capabilities. Therefore, the section title "LLM Benchmarks are Biased from Realistic User Queries and Preferences" seems biased, implying these benchmarks are meant to measure human preferences.
2. Another potential issue is the assumption that ChatBot Arena serves as the groundtruth for calculating correlations. As noted by the authors in lines 127-133, ChatBot Arena itself is biased. Therefore, even though MixEval has a larger user base, it might overfit to ChatBot Arena to some extent, making it more of an alternative rather than a more accurate evaluation of user preferences.

Technical Quality: 4
Clarity: 4
Questions for Authors: I am a bit confused about Table 2. The authors briefly mention "periodically update" in the main text, but I did not fully understand the details of this update. Could you explain this further?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: In the appendix, the authors address some limitations in a Q&A format, such as the potential biases in MixEval. My views on limitations are reflected in the weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding this work novel, effective, comprehensive, and solid! Below are our responses to the concerns:

# Major Issue:

## Concern 1

> MixEval dynamically updates by mixing popular benchmarks (e.g., MMLU, BoolQ, GSM8K), which may not mitigate contamination. Most of these benchmarks are saturated and suffer from contamination. Although the authors claim they will "dynamically expand our benchmark pool with newly released benchmarks to further enhance the mixed benchmark distribution," this does not address the root issue of contamination.

We thank the reviewer for this insightful comment! **We analyzed the dynamism and contamination of MixEval in detail in the general response to all reviewers and AC (at the top of this page). Hope that will resolve this concern!**

# Minor Issues:

## Concern 2

> While the topic distributions in Figure 2 are impressive, most of these benchmarks were not designed to simulate human preferences, hence the skewed topic distributions may be intentional for evaluating specific tasks, domains, or capabilities. Therefore, the section title "LLM Benchmarks are Biased from Realistic User Queries and Preferences" seems biased, implying these benchmarks are meant to measure human preferences.

Thanks for pointing that out; this is an interesting topic to discuss! First of all, as indicated in the section title, what we care about are **real-world use cases** and **user preferences**, not solely user preferences. "Real-world use cases" depicts the distribution of tasks, while "user preferences" depicts the grading process. **We believe the core principle of model evaluation is to evaluate models as they will be used in the real world.** As a result, evaluations should be designed based on this principle.
(We also illustrated this in Section A.1, "Why are real-world human queries and preferences important?") **However, most of the existing evaluations are developed based on the interest of their creators instead of real-world users**; e.g., GSM8K was created to measure models' mathematical reasoning abilities. Is mathematical reasoning something that we should measure? Yes, but we are not sure how important it is. Hence, Section 2 aims to measure the significance of existing evaluations by (1) comparing their distributions with the real-world use-case distribution and (2) showing their correlations with large-scale human preferences. "LLM Benchmarks are Biased from Realistic User Queries and Preferences" is the conclusion we reached after rigorous analysis.

It is important to note that Section 2 did not aim to wholly criticize the existing evaluations, because they are still a good measure for specific tasks, domains, or capabilities. Section 2 only serves as a meta-evaluation for the community: (1) it tells people that the results obtained from these evaluations may not generalize to real-world use cases, as they show a deviated task distribution and limited correlation with real-world use cases; (2) it visualizes the task distributions of different evaluations to help people select the correct benchmarks for the abilities they want to measure–e.g., it shows that one shouldn't take WinoGrande or DROP as a general-purpose benchmark, as their tasks only cover a very small range of topics.

## Concern 3

> Another potential issue is the assumption that ChatBot Arena serves as the groundtruth for calculating correlations. As noted by the authors in lines 127-133, ChatBot Arena itself is biased. Therefore, even though MixEval has a larger user base, it might overfit to ChatBot Arena to some extent, making it more of an alternative rather than a more accurate evaluation of user preferences.

We thank the reviewer for this insightful comment!
As noted in the footnote on the first page, the Chatbot Arena leaderboard is indeed not the sole indicator of human preference, but it currently serves as the only gold-standard, large-scale human preference benchmark within the community. Therefore, we could not find a better proxy of real-world user preferences than Chatbot Arena to compare against. Because of this, we didn't claim that MixEval provides a better approximation of real-world human preference than Chatbot Arena (it might or might not); instead, MixEval provides an efficient and low-biased evaluation that can reflect real-world use cases.

## Concern 4

> I am a bit confused about Table 2. The authors briefly mention "periodically update" in the main text, but I did not fully understand the details of this update. Could you explain this further?

Sure! As illustrated in lines 51-53 and Section 3.4 of the submitted paper, we update the data points of MixEval via (1) **batch web query update** (sampling different web query batches from the crawled web queries), (2) **source web query update** (updating all the web queries with the latest Common Crawl), or (3) **benchmark pool update** (incorporating new ground-truth-based benchmarks into the benchmark pool). Since the mechanism of MixEval is to match web queries with benchmark pool samples, the above three updating methods refresh both the web queries (the first and second methods) and the benchmark pool samples (the third method). We will specify this more clearly in the next version.

---

Rebuttal Comment 1.1: Title: Thank You for the Response! Comment: Thank you for your response and the additional experiments, which have led me to increase the score from 6 to 7. I agree that MixEval can mitigate contamination, but I still do not believe it fundamentally solves the problem, as it still heavily relies on existing benchmarks.
I disagree with the authors' explanation for Concern 2, as many benchmarks were not originally designed to measure human preferences, and human preferences are just one aspect of the many evaluations for LLMs (although a very important one). Technical reports for all LLMs release results from numerous benchmarks to give the community a comprehensive understanding of their capabilities. Calculating correlations between all these benchmarks and human preferences introduces some potential biases, such as assuming they should align with human preferences, which is not necessarily the case. Despite these issues, which I consider minor, I personally recommend the acceptance of this paper. Good luck!

---

Reply to Comment 1.1.1: Title: Thank you for your appreciation! Comment: Thank you once again for recognizing MixEval and for your insightful comments. We will incorporate the feedback from this rebuttal to enhance future releases and revisions.
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable feedback. We identify that the main concern among reviewers is about the contamination of MixEval. **Therefore, we conduct additional contamination analysis and provide this general response.**

# High-Level Takeaways

1. **Low Natural Contamination**: MixEval and MixEval-Hard demonstrate a low natural contamination ratio, as detailed in Table 1 of this rebuttal (below).
2. **Purpose of Model Evaluations**: Model evaluations typically serve two main purposes: **developer self-validation** and **leaderboard competition**. For self-validation, contamination of MixEval is not a concern. For leaderboard competition, MixEval effectively mitigates contamination through **batch web query updates**, **source web query updates**, and **benchmark pool updates**, demonstrating its high dynamism compared with traditional ground-truth benchmarks.
3. **Broader Contributions**: While dynamism is a key feature of MixEval, the contributions of this work extend significantly beyond this aspect, as elaborated in lines 60-73 of the submitted paper.

# Detailed Justification

When does contamination affect model evaluations? Generally, model evaluations serve two purposes: **developer self-validation** and **leaderboard competition** [1].

## Self-Validation

For self-validation, MixEval is effective regardless of contamination levels, as **developers aim to exclude evaluation data from training data to ensure accurate testing**. At this stage, they will not deliberately overfit the evals. In addition, they usually employ contamination detection to remove leaked evaluation data from the training data, enhancing the generalizability of the internal evaluation pipeline. MixEval, being efficient and low-biased, accurately mirrors real-world use cases, making it ideal for rapid model iteration.
## Leaderboard Competition

Conversely, in leaderboard competitions, some contamination is inevitable due to the static nature of ground-truth-based benchmarks. **Developers may not rigorously exclude evaluation data from training, or may even include it intentionally to improve leaderboard rankings.** Below, we demonstrate that MixEval can mitigate both natural and deliberate contamination. **We wish to highlight that MixEval only mitigates contamination rather than solving it completely, as mentioned on lines 50 and 207 and in Figure 4 of the submitted paper.**

### Natural Contamination

**For natural contamination, MixEval mitigates it via benchmark mixture.** According to Table 1 of [2], contamination levels of existing web benchmarks range from 1.1% to 40.6%. Generally, more popular benchmarks exhibit higher contamination. For example, MMLU shows a relatively high contamination ratio (24.3%), yet remains crucial to the community and our benchmark pool. MixEval addresses this by judiciously mixing popular benchmarks with less contaminated ones (e.g., CommonsenseQA), thus reducing the natural contamination ratio. Note that MixEval can be updated over time, and we will include contamination detection in future releases to further minimize contamination.

We use the contamination detector in [2] to detect natural contamination of benchmarks by measuring the ratio of benchmark data found on the web. As shown in the table below, MixEval and MixEval-Hard exhibit a lower contamination ratio compared with popular benchmarks such as MMLU, due to the smoothing effect mentioned above. Overall, MixEval achieves the highest correlation with real-world use cases while maintaining a low natural contamination ratio and high efficiency.

Table 1: Contamination ratio of different benchmarks.
| Dataset | Split | #Total | #Input-only Contamination | #Input-and-label Contamination |
|----------------|-------|--------|---------------------------|-------------------------------|
| ARC_c | Test | 1172 | 53 (4.5%) | 283 (24.1%) |
| CommonsenseQA | Dev | 1221 | 3 (0.2%) | 17 (1.4%) |
| Winogrande | Dev | 1267 | 0 (0.0%) | 14 (1.1%) |
| C-Eval | Dev | 1346 | 69 (5.1%) | 547 (40.6%) |
| Hellaswag | Dev | 10042 | 46 (0.5%) | 1201 (12.0%) |
| MMLU | Test | 13987 | 678 (4.8%) | 3399 (24.3%) |
| **MixEval** | Test | 4000 | 129 (3.2%) | 414 (10.4%) |
| **MixEval-Hard** | Test | 1000 | 39 (3.9%) | 106 (10.6%) |

### Deliberate Contamination

**For deliberate contamination, MixEval mitigates it by dynamically updating the web user queries and the benchmark pool.** If model developers deliberately overfit evals, contamination is nearly impossible to fully eliminate. Even with dynamic systems like Chatbot Arena, evaluations can still be hacked, e.g., by fitting on LMSys user data or hiring biased workers. Developers may hack MixEval by **(1) directly fitting on MixEval data**, or **(2) fitting the benchmark pool**. We address method (1) by periodically updating MixEval data points through "**batch web query update**" (sampling new web query batches from the crawled web query pool) or "**source web query update**" (updating the whole web query pool with the latest Common Crawl), and then performing benchmark mixture. Table 2 of the submitted paper shows its effectiveness, demonstrating significant differences between MixEval versions. Method (2) is tackled by "**benchmark pool update**", incorporating new ground-truth benchmarks from the community, e.g., replacing MMLU with MMLU-Pro. We will add this discussion to Section A as another FAQ to strengthen readers' understanding of the dynamism of MixEval.

References:
[1] Fourrier, Clémentine. "Let's Talk About LLM Evaluation."
[2] Li, Yucheng. "An open source data contamination report for llama series models."
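The percentage figures in Table 1 above can be reproduced from the raw counts; a small illustrative check (rows copied from the table, not new measurements):

```python
# (name, total, input_only_count, input_and_label_count) from Table 1 above.
rows = [
    ("MMLU", 13987, 678, 3399),
    ("MixEval", 4000, 129, 414),
    ("MixEval-Hard", 1000, 39, 106),
]
for name, total, input_only, input_and_label in rows:
    pct_input = 100 * input_only / total
    pct_both = 100 * input_and_label / total
    print(f"{name}: {pct_input:.1f}% input-only, {pct_both:.1f}% input-and-label")
```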
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
How does Architecture Influence the Base Capabilities of Pre-trained Language Models? A Case Study Based on FFN-Wider and MoE Transformers
Accept (poster)
Summary: This study investigates the influence of architecture on pre-trained language models' base capabilities. It reveals that the contribution ratio of Multi-Head Attention to pre-trained language modeling affects base capabilities. FFN-Wider Transformers reduce this ratio, leading to a decline in base capabilities. The researchers propose a Combination Enhanced Architecture (CEA) to address this issue. They also extend this to Mixture of Experts (MoE) transformers, achieving significant improvements in base capabilities. Strengths: + The paper shifts the focus from the commonly studied impact of scale on pre-trained language models to the impact of architecture. + The exploration of the influence of using a wider FFN layer is interesting and should inform future research. + The proposal of CEA as a solution to the decline in base capabilities shows a proactive approach to solving the identified problem. + The findings are confirmed through extensive experiments, adding credibility to the claims. Weaknesses: - The description of CEA is not detailed enough. Technical Quality: 3 Clarity: 3 Questions for Authors: In general, I think the discovery of the paper is interesting, and I don't have many questions. Just some minor issues: - What is the definition of mutual information? Give some introduction. - How is the pre-training performance of the models? - How is the structure of CEA? It is not clear to me after reading Section 6. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors address it in Section 8. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful review and valuable feedback! We answer your questions below. *** **Q1:** How is the pre-training performance of the models? **A1:** For the BERT and GPT experiments, the results are primarily presented in Tables 1 and 2, with a textual description of the pre-training performance achieved by each model group. For the MoE model experiments, **we have included in the attached PDF (in the global response above) pre-training performance curves similar to Figure 1(c).** These curves allow for a comparison between the original MoE model and the improved MoE model, which we hope addresses your concerns. *** **Q2:** What is the definition of mutual information? Give some introduction. **A2:** We apologize for the oversight. The absence of the definition of mutual information might cause confusion for researchers outside this field. We will follow your suggestion and add the relevant definition and explanation of mutual information. *** **Q3:** How is the structure of CEA? It is not clear to me after reading Section 6. **A3:** We apologize for any confusion caused, and we would like to provide further clarification. In Section 5, we introduce the Combination Adjustable Architecture (CAA), which is an architecture designed for analytical purposes. This architecture primarily involves splitting the original FFN in the FFN-Wider model according to a certain width ratio to obtain Outer-FFN and Inner-FFN. We then adjust the width ratio between these two components to support our arguments. At this stage, we pre-train multiple models with varying width ratios on a small scale for analysis. In Section 6, we introduce the Combination Enhanced Architecture (CEA), which is similar to the CAA mentioned above. However, since CEA is an improved architecture rather than an analytical one, it requires a fixed width ratio to be determined before conducting large-scale pre-training experiments. 
For different models, we determined various width ratios and then performed large-scale pre-training and downstream experiments. *** --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal! It addresses my concerns and I will keep my rating.
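The Outer-FFN/Inner-FFN split described in A3 above can be sketched with simple width bookkeeping. This is a hypothetical illustration with assumed function names and an assumed rounding rule, not the authors' implementation.

```python
# Hypothetical sketch of the width split described in A3: an FFN-Wider model's
# FFN with intermediate width widen_factor * d_model is split by a ratio into
# an Outer-FFN (kept in the standard position, as a transformation function)
# and an Inner-FFN (relocated inside the MHA layer, enhancing the combination
# function). Their widths always sum to the original FFN width.

def split_ffn_width(d_model: int, widen_factor: int, ratio: float):
    """Return (outer_width, inner_width); their sum equals the original width."""
    assert 0.0 <= ratio <= 1.0
    total = widen_factor * d_model
    inner = round(ratio * total)   # width moved inside the MHA layer
    outer = total - inner          # width left in the standard FFN position
    return outer, inner

# At a ratio of 0% or 100%, one of the two FFNs vanishes entirely, matching
# the authors' later remark that only a single FFN remains in those cases.
print(split_ffn_width(768, 32, 0.0))
print(split_ffn_width(768, 32, 0.25))
print(split_ffn_width(768, 32, 1.0))
```

Under this scheme, the CAA sweeps the ratio for analysis, while the CEA fixes one ratio before large-scale pre-training, as the rebuttal explains.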
Summary: The paper studies the contribution ratio of the FFN and MHA layers in transformers and its effect on out-of-distribution performance. It finds that a wider FFN layer increases its contribution ratio and lowers OOD performance. Lastly, the paper proposes a new architecture that moves part of the FFN layer to MHA and shows that the CEA can improve the baseline models' OOD ability. Strengths: - The paper analyzes the contribution ratio with new methods that evaluate Mutual Information and token prediction accuracy of each layer's output. - The proposed CEA method has improved the MoE transformer in various tasks and datasets with the same parameter scale, showing its effectiveness in increasing OOD ability. Weaknesses: - The paper does not give a convincing explanation of why the study aligns pre-trained performance with different parameter scales. A larger scale model may not fully converge when it has a similar training loss to a smaller model. It is also not known whether the proposed CEA will harm in-domain performance. - The MHA module in the vanilla transformer also has linear layers for Q, K, V, and after self-attention. I think they can be seen as inner-FFNs that will transform the combination function, making it unnecessary to introduce an extra inner-FFN layer. - Tables 1 and 2 show that BERT w/ CEA performs worse than vanilla in GLUE and SGLUE and GPT w/ CEA performs worse on all datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses 1 and 2. Others: - Figure 6 lacks a legend to explain the figure elements, including the blue line, orange bar, blue and orange dotted lines. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful review and valuable feedback! We answer your questions below. *** **Q1:** The paper does not give a convincing explanation of why the study aligns pre-trained performance with different parameter scales. A larger scale model may not fully converge when it has a similar training loss to a smaller model. It is also not known whether the proposed CEA will harm in-domain performance. **A1:** You mentioned why we adopted the alignment of pre-training performance for models with different parameter scales. We would like to explain this further. We mainly consider that comparing models with different parameter scales is often unavoidable (even outside the scope of this paper). Therefore, finding a relatively reasonable basis for cross-parameter scale comparison is necessary. **This method needs to eliminate the interference of parameter advantages, computational advantages, etc., and maximally reflect the base capability differences brought by the architecture itself.** In our paper, we explained that aligning pre-training performance might be more suitable for this goal (Section 2.2), while other methods might not be appropriate (Appendix A). You also mentioned the potential issue of inadequate convergence in large parameter models, and you might be concerned about the inconsistency in the behavior of models during the later stages of training. While this factor cannot be entirely ruled out, we have not yet explored this topic deeply in our current analysis. **However, one thing is certain:** in our work, the comparison between large parameter models and their improved versions does not face this issue, as they have the same parameters, and there should not be significant differences in the degree of convergence. Regarding your concern about whether the new architecture would harm in-domain performance, as shown in Figure 6, most width ratios do not pose any problems. 
Only extreme width ratios might harm pre-training performance. Our experiments have demonstrated that choosing non-extreme width ratios can still achieve performance gains (Appendix I). For MoE models, **we have plotted pre-training performance curves similar to Figure 1(c) in the attached PDF (in the global response above).** It can be seen that the improved MoE models do not experience a degradation in pre-training performance, which essentially indicates the usability of the new architecture. *** **Q2:** The MHA module in the vanilla transformer also has linear layers for Q, K, V, and after self-attention. I think they can be seen as inner-FFNs that will transform the combination function and make it not necessary to introduce an extra inner-FFN layer. **A2:** You mentioned that there are also linear layers within the MHA module, which can be considered as inner-FFN layers, making additional inner-FFN layers potentially unnecessary. We had considered this issue as well, but we found that the linear layers in the MHA module lack non-linear transformations, which directly results in their transformation capability being inferior to that of the FFN layers. Therefore, considering the linear layers in the MHA as inner-FFN might still face the issue of the MHA contribution ratio as discussed in our paper. Ultimately, we chose to convert a part of the FFN into inner-FFN. *** **Q3:** Tables 1 and 2 show that BERT w/ CEA performs worse than vanilla in GLUE and SGLUE and GPT w/ CEA perform worse in all datasets. **A3:** Your observations are correct, and they are reasonable within the context of our work. Firstly, since the models in Tables 1 and 2 are aligned based on pre-training performance, the FFN-Wider model cannot outperform the vanilla model solely based on parameter size. Therefore, it is expected that the vanilla model performs better on downstream tasks, and it is normal for the FFN-Wider model to be inferior to the vanilla model. 
Secondly, **the fact that FFN-Wider w/ CEA is also inferior to the vanilla model is within our expectations.** The main reason is that we used the FFN-Wider model primarily as a good subject for analysis. Improving it was only to demonstrate the effectiveness of our analysis, and ultimately, the improvements are intended to benefit the MoE model. In fact, the FFN-Wider model does not have efficiency advantages over the vanilla model. Improving it is less practical than directly using an enlarged vanilla model. Thus, even if the FFN-Wider w/ CEA were to surpass the vanilla model, it would not have practical significance. Therefore, we ultimately focus on demonstrating practical utility through the MoE model. *** **Other Responses:** We apologize for any inconvenience caused by the figures in the paper. We will make improvements in subsequent versions to enhance the clarity and comprehensiveness of the related expressions in the paper. *** --- Rebuttal Comment 1.1: Comment: Thanks for the response, it addresses most of my concerns. I still have some concerns about the results of BERT/GPT w/ CEA. I understand that the FFN-Wider performs worse than the vanilla model, but I think w/ CEA should at least achieve similar results to the vanilla model. Otherwise, there is no reason to use CEA rather than vanilla. Maybe authors can show that the CEA can outperform vanilla with the same parameter scale and not align pre-training performance. However, the results of MoE are convincing. I think this is a good paper and will raise my rating.
Summary: This paper examines how the architecture of a transformer model influences its base capabilities, such as out-of-distribution tasks, transfer learning, and few-shot learning. Specifically, it explores the effects of replacing the feed-forward network (FFN) with a wider FFN (FFN-wide) in various parts of the architecture. Initially, the authors find that replacing the FFN (referred to as the transformation function) after the attention layer (referred to as the combination function) with FFN-wide leads to worse performance. They analyze this impact by measuring the contribution of the combination function using techniques such as mutual information and token prediction. The results indicate that performance deteriorates in most cases when the contribution of the combination function decreases. The authors then devise multiple architectures with different width ratios of FFN-wide in the combination and transformation functions. They select the best ratio model architecture, termed CEA (Combination-Enhanced Architecture). This CEA architecture is subsequently used in a Mixture of Experts (MoE) model, demonstrating improvements over the non-CEA MoE model. Strengths: - The paper addresses an interesting problem by measuring the impact of transformer architecture on the downstream performance of the model, which is crucial for optimizing and understanding transformer models. - Attributing performance to the inner workings of a neural network, particularly the contribution of specific components, and analyzing this contribution using techniques like mutual information is both novel and insightful. - It is also interesting to see the impact on two different kinds of model like Bert and GPT and see how the optimal width for the two types of models are different. - The application of CEA to a mixture of experts models definitely demonstrates the practical benefits of this paper. 
Weaknesses: - The performance improvements for the different variations of the architectures seem minor, and the absence of standard deviations makes it difficult to assess the robustness of the results. - For both BERT and GPT, even after a thorough architecture search, the performance is at best similar to the vanilla models, which is not very promising. - Similarly for the MoE experiment, it would be useful to add the standard deviation of the results. Additionally, since the training loss is lower for MoE with CEA compared to vanilla MoE, it is hard to determine whether the improvements are due to the architecture's inductive bias or just the lower training loss. Similar to the rest of the paper, please also report numbers for these experiments while keeping the same training performance level. Overall, the paper is interesting, but the performance values being so close and the lack of variance measurement do not give me much confidence in the results. Minor: In Table 1 and Table 2, the results for the proposed approach are bolded rather than the best approach for that experiment. In most figures, the y-axis does not start at 0, which can give readers an inaccurate representation of the data. Technical Quality: 2 Clarity: 2 Questions for Authors: - What does it mean for the width ratio to be 0% and 100%? Does this imply that the FFN is removed in these cases? - What does it mean that FFN-wider models with CEA are not able to beat vanilla models in some cases in Table 1 and Table 2? Isn't the vanilla model one of the ratios explored? If not, would it make more sense to explore the architectures on some other dimensions? - Please indicate whether lower or higher values are better for all the tables and figures in the paper. 
Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors can also address the limitations of exploring only a narrow subset of architectures and discuss whether any parts of the proposed methodology can be extended to other aspects of transformer architectures. Additionally, all experiments were performed on smaller-scale models, so the results may not generalize to larger models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful review and valuable feedback! We answer your questions below. *** **Q1:** since the training loss is lower for MoE with CEA compared to vanilla MoE, it is hard to determine whether the improvements are due to the architecture's inductive bias or just the lower training loss ... please also report numbers for this experiments by keeps the same training performance level. **A1:** **Our MoE experiments still utilized a similar setting: ensuring that the pre-training performance of the original MoE and the improved MoE were roughly consistent.** We suspect that some results in Table 3 or Figure 1(c) may have caused the misunderstanding, so we would like to clarify them here. In Table 3, only "SlimPajama-CC&C4 (Loss)" represents pre-training performance, as the models in Table 3 were pre-trained on the CC and C4 subsets of the SlimPajama dataset. Because the SlimPajama dataset achieved de-duplication across subsets, the remaining results for SlimPajama belong to OOD tests. The performance curves in Figure 1(c) are similar, only representing OOD performance. Another concern might be that even if "SlimPajama-CC&C4 (Loss)" in Table 3 represents pre-training performance, the improved MoE (2.303) still seems to perform better than the original MoE (2.315). Is this advantage small enough? To illustrate the issue clearly, **we have included a PDF attachment (in the global response above)** with pre-training performance curves similar to Figure 1(c) and compared side-by-side with the original Figure 1(c). **It can be seen that:** 1) The gap in pre-training performance is negligible compared to the OOD performance gap; 2) During pre-training, there were moments when the pre-training performance of both models was identical, yet the OOD performance gap remained significant. Therefore, we believe this situation still largely aligns with our claimed setup. 
*** **Q2:** For both BERT and GPT, even after a thorough architecture search, the performance is at best similar to the vanilla models, which is not very promising. **A2:** For the FFN-Wider model, it is indeed the case that it does not offer efficiency advantages over the vanilla model. Improving the FFN-Wider model is less beneficial than simply using a scaled-up vanilla model. In fact, we did not emphasize its practical value but rather used it as a good analytical object to conduct some meaningful analysis. The key point is that we eventually extended the relevant analysis to the MoE model. The MoE model is a practical architecture that offers the advantage of expanding model capacity with lower computational costs compared to the vanilla model. With the same amount of computation, the MoE model performs better in pre-training, making it worthwhile to improve the MoE model. *** **Q3:** What does it mean that FFN-wider models with CEA are not able to beat vanilla models in some cases in Table 1 and Table 2? Isn't the vanilla model one of the ratios explored? ... **A3:** The response to A2 should also apply to this question. Additionally, it is necessary to clarify that the FFN-Wider model has a wider FFN, and even after adjusting width ratios, it cannot be equivalent to the vanilla model. *** **Q4:** What does it mean for the width ratio to be 0% and 100%? ... **A4:** Generally, adjusting the width ratio results in the presence of two FFNs. However, when the ratio is 0% or 100%, one of the FFNs is removed, leaving only a single FFN. *** **Q5:** Please indicate whether lower or higher values are better for all the tables and figures in the paper. **A5:** Our explanation is as follows: **Figure 1(b):** The orange line represents a metric, where higher values are better. **Figure 1(c):** Lower values are better. **Figure 2:** Higher values are better. **Figures 3 and 4:** No metrics are present. 
**Figure 6:** The blue line represents a metric, where higher values are better. **Tables 1 and 2:** Higher values are better. **Table 3:** Metrics labeled with "Loss" and "PPL" are better when lower, while metrics labeled with "Acc." are better when higher. The tables in the appendix provide detailed information, with the metrics having similar meanings. *** **Q6:** The performance improvements for the different variations of the architectures seem minor, and the absence of standard deviations makes it difficult to assess the robustness of the results ... **A6:** We would like to address this from two perspectives. **On one hand,** similar to A2 above, we do not actually consider the FFN-Wider model as a replacement for the vanilla model. The FFN-Wider model does not have practical competitiveness; its performance improvement is mainly to validate our analysis. Therefore, the extent of the improvement may not be particularly critical, and our main focus is to show the performance of the MoE model. **On the other hand,** given that there were no changes in scale or data, only slight architectural modifications, the extent of this performance improvement seems reasonable. Although it has limited practical significance, it generally supports our analysis. As for the robustness, on one hand, the results in Tables 1 and 2 are averaged performance across multiple tasks, which provides some degree of robustness. On the other hand, **we have supplemented the results with standard deviations in the PDF attachment (in the global response above)**, also indicating that the results are relatively robust. The tables in the PDF attachment only list the results involving multiple rounds of experiments. Specifically, the results in Table 1 are from 8 rounds (H=128) and 4 rounds (H=768) of experiments, Table 2 from 10 rounds, and Table 3 from 5 rounds. *** **Other Responses:** We apologize for any inconvenience caused by the tables and figures. 
We will make improvements in subsequent versions to enhance the clarity and comprehensiveness of the related expressions. *** --- Rebuttal 2: Comment: Thank you very much for reviewing our work! We have addressed some of the concerns you raised in our rebuttal. Specifically, for the issues you were particularly concerned about, such as the fairness of the MoE experiment comparisons and the standard deviation of the experimental results, we have provided more detailed results (in the global response PDF above). We strongly believe that these additional results will effectively address your concerns, especially regarding the fairness of the MoE experiment comparisons, which seems to have led to some misunderstanding. If time permits, we kindly ask you to review our rebuttal. We look forward to your feedback. Thank you!
Summary: The paper examines the influence of architecture on the base capabilities (OOD, transfer learning, few-shot learning) of large language models. The main focus is on FFN-Wider transformers and understanding why they have poorer base capabilities compared to vanilla transformers. The contribution ratio of multihead attention to pretrained language modeling is found to be a key factor affecting base capabilities. Based on this observation, a Combination Enhanced Architecture is proposed and also extended to mixture of experts transformers. The analytical study is backed by experimental evaluation, successfully achieving significant improvements in base capabilities of the 14B parameter MoE model. Strengths: + While most existing study of large models has focussed on the impact of scale, this work is a significant effort in understanding the influence of architecture. + The empirical findings are supported by a more deeper interpretation of the underlying mechanisms of these influences. + The proposed CAA wherein the wider FFN is split into two adjustable parts - outer-FFN which stays as a transformation function, and an inner-FFN which is relocated within the MHA layer for enhancing the combination function is interesting in its own right. + The extension to MoE transformers and the experimental demonstration with 14B parameter GPT architecture MoE model is very interesting. Weaknesses: - Some of the findings appear to be obvious (perhaps, in hindsight) - as the contribution ratio of the MHA layer increases, the base capabilities also improve. It would have been interesting to also find some non-intuitive influences. - Overall the paper's analysis is much narrower in scope than the title of the paper suggests. Nonetheless, it is an interesting and useful study. - The language and presentation of the paper can be improved. It will be useful to have the grammar issues fixed in the paper before the final version. 
Technical Quality: 3 Clarity: 2 Questions for Authors: - Some parameter choices, such as choosing the intermediate dimension to be 32d instead of, say, 8d or 16d, could be better explained. - Was the width adjustment limited to Outer-FFN? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The reviewer does not expect any negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful review and valuable feedback! We answer your questions below. *** **Q1:** Some choices of parameters such as chosing intermediate dimension to 32d instead of say 8 or 16d can be better explained. **A1:** We agree with your opinion that attempting more width would enhance the persuasiveness of the work. In fact, the choice of 32d in the paper was not made with any special consideration; it was simply based on an assessment of computational resources. In future work, we will consider enriching this aspect to strengthen the persuasiveness of the work. *** **Q2:** Was the width adjustment limited to Outer-FFN? **A2:** Adjusting the width ratio is mainly aimed at the original FFN in the FFN-Wider model, which will be split into Outer-FFN and Inner-FFN. The sum of their widths is equal to the original FFN, meaning an increase in the width of one will result in a decrease in the width of the other. Therefore, the width adjustment should occur simultaneously in both the Outer-FFN and the Inner-FFN. *** **Other Responses:** We sincerely apologize for any inconvenience caused by the language and expression issues in the paper. We will improve and refine the relevant expressions in the subsequent versions of the paper. *** --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for addressing my concerns. Having carefully read all other reviews and your responses, I will keep my positive score.
Rebuttal 1: Rebuttal: Here is a PDF attachment containing the figures and tables referenced in the detailed responses below. Pdf: /pdf/55d9011d221d8b62d147838c959f2a2348f24ab4.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Alignment for Honesty
Accept (poster)
Summary: This paper focuses on the task of honesty alignment. The authors first explore the task formulation of the alignment for honesty problem, then develop a series of evaluation metrics based on the change of response type to quantify the honesty of a model. They then propose a collection of training methods to improve the honesty of models. Experiments demonstrate the effectiveness of the proposed methods. Strengths: 1. The problem addressed in this paper is an important research topic. The writing quality of this paper is very good, and the authors define the honesty alignment problem with very clear logic, which is precisely what the community lacks. 2. In addition to defining the problem, the methods proposed by the authors can effectively improve the honesty of the model. The description of the methods is also clear and concise. Weaknesses: 1. Although the methods are only part of this paper, the proposed methods are all heavily based on human heuristics, such as "learning to express uncertainty might be useful." The novelty is relatively limited today, and some methods also rely on threshold selection. 2. The comparability at the method level is slightly lacking, and there is insufficient analysis of why the methods work. For example, 1) The number of training samples for the three methods is not consistent, and it cannot be ruled out that multiple samples perform better because they trained the correct answer more times. 2) There is insufficient explanation as to why adding a confidence score is better than just training the correct answer. Intuitively from the prompt, such training does not directly make the model learn to say "I don't know." 3. The paper lacks comparison with several important baselines, for example, representation engineering [1]. [1] Representation Engineering: A Top-Down Approach to AI Transparency Technical Quality: 3 Clarity: 4 Questions for Authors: 1. 
Among the three aspects of HHH, the authors discussed the impact of Honesty improvement on Helpfulness. Can a small experiment be added to verify the impact of increased honesty on harmlessness? 2. Can the authors analyze the accuracy of the uncertainty expressions that the final model has learned? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: This paper does not include a Limitations & Social Impacts section. We hope the authors can add this to comply with the NIPS Checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Weakness 1 Thank you for your feedback. As you kindly pointed out, the supervised fine-tuning methods are indeed only part of this paper. The primary contribution of our work is the development of a comprehensive and feasible framework for "alignment for honesty": this includes establishing a precise problem definition, proposing effective evaluation metrics, and also demonstrating the effectiveness of several straightforward training methods. We hope our first step on alignment for honesty can inspire more excellent future work. # Response to Weakness 2 Thank you for your valuable comment. We would like to take this opportunity to provide some clarifications regarding our proposed methods. (1) Regarding > ..., it cannot be ruled out that multiple samples perform better because they trained the correct answer more times. We would like to kindly clarify that the Multisample method does not perform better in terms of "Accuracy" although it "trained the correct answer more times", as shown in Tab. 3. However, we would like to note that there is an improvement in the honesty score, and we are more than willing to explain why it works for "Honesty": By implicitly learning from the proportions of correct answers and idk responses among the m sampled responses in the expanded training data for Multisample, the model can better recognize its knowledge boundaries in a fine-grained manner, leading to the improved honesty score. (2) Regarding > ... Intuitively from the prompt, such training does not directly make the model learn to say "I don't know." First, we would like to respectfully remind you that the training data for both the Absolute and Confidence methods includes an equal number of "unknown questions" (as shown in Eq. 9) where the output is an idk response. This means that in both methods, the model has the same opportunity to *directly* learn to say "I don't know". 
Additionally, compared to Absolute, Confidence provides prefixed confidence expressions for "known questions". These prefixes serve as more fine-grained supervision signals, enabling the model to implicitly learn to more precisely capture its knowledge boundaries. The experimental results presented in Tab. 3 and 4 demonstrate that these additional hints are indeed beneficial for improving the honesty score, as we discuss in detail in Lines 256-262 of our paper. # Response to Weakness 3 We sincerely appreciate you bringing this valuable reference to our attention. However, we would like to kindly clarify that [1] has a different purpose from our work. Section 4.3.3 of [1] introduces adding honesty reading vectors to elicit truthful answers to *known* questions, which corresponds to the concept of "eliciting latent knowledge" as we discuss in Lines 149-154. In contrast, our study aims to adjust the model's behavior for both *known and unknown* questions. We will address this distinction in our related work section in the revised version. Once again, thank you for sharing this reference, and we look forward to incorporating exploration at the representation level in our future work. [1] Representation Engineering: A Top-Down Approach to AI Transparency # Response to Question 1 Thank you for your thoughtful question. We would like to explain that the process of aligning the model with honesty does not explicitly introduce any instructions that would compromise safety, so we did not conduct experiments on how increased honesty impacts harmlessness. However, we are more than willing to explore this further through empirical research. Specifically, we utilize the 700 test prompts from BeaverTails-Evaluation that can potentially elicit harmful responses, and employ GPT-4o to assess whether the model responses are safe, unsafe, or controversial (in terms of safety). 
As shown in Table 1 in the global response, honesty-oriented supervised fine-tuning has almost no impact on the model's inherent harmlessness.

# Response to Question 2

Thank you for your insightful question. We are very pleased to analyze the relationship between confidence expressions and accuracy of the model trained using Confidence-Verb. Specifically, we bin M_t+1's responses (the model after alignment using Confidence-Verb) by their expressed confidence and measure the average accuracy of responses in each confidence bin. From Figure 1 in the global response, we observe that:

1. In areas of high confidence, accuracy is slightly lower than its corresponding confidence, reflecting the model's over-confidence, which is consistent with related work [2].
2. In areas of low confidence, particularly at confidence=0.2, accuracy is actually higher than its corresponding confidence. This can be explained by the fact that the training data for Confidence-Verb pairs "confidence expression" with "*correct* response", encouraging the model to **prioritize correct answers over calibrated confidence.** This suggests a potential issue for future work on calibration: how to balance calibration and performance.

We acknowledge that our Confidence method, while leading the model to respond more prudently, has potentially limited effects on calibration due to the nature of our constructed training data. We encourage future work to handle known questions in a more fine-grained manner (as stated in Lines 42-44) and strike a balance between calibration and performance. If we have misinterpreted any aspect of your questions, or if you have suggestions for more targeted experiments, please feel free to let us know.

[2] Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback

# Response to Limitations

Thank you for your reminder. We would like to clarify that we have included a Limitations section in Section 5.
Additionally, as outlined in Lines 979-985, we have explained why this paper does not address societal impact. However, we can reorganize these into a combined Limitations & Social Impacts section.

---

Rebuttal Comment 1.1: Title: Response to the authors

Comment: Thank you to the authors. I have carefully read the authors' rebuttal. This clarifies some of my misunderstandings about the IDK training samples, and I appreciate that the authors conducted experiments addressing the questions I raised. I'm pleased to see the additional results.

Some additional comments: As the authors addressed, in the PDF they uploaded, the calibration error is relatively high. This raises concerns about the actual constraining effect of the uncertainty score. The method's functioning may not align with its intended design of "making the model aware of its level of certainty." It's possible that it merely increases the computational load during inference, allowing the model more steps to better infer the result before attempting an answer. Admittedly, this method is a minor contribution in the paper and is unlikely to significantly affect the overall score; I have also carefully read the relevant experimental analysis section, but it does not fully address this concern. If the authors could provide a more robust analysis demonstrating how the model trained with the uncertainty score prefix specifically influences model behavior, or even internal representations, it would be beneficial in clarifying the method's actual impact and mechanism.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your insightful and important question. As we previously discussed in our Response to Weakness 2, the only difference between the Absolute and Confidence-Verb methods is the addition of a confidence prefix (i.e., uncertainty score prefix) to known questions. We would like to provide a detailed analysis to explain the advantages of Confidence-Verb over Absolute.
(1) Regarding honesty-related scores: In our Response to Weakness 2, we emphasized that **the confidence prefix "enables the model to implicitly learn to more precisely capture its knowledge boundaries".** To substantiate this claim, we first calculate the *expected accuracy* of the unaligned model on the TriviaQA evaluation dataset, and then examine the distribution of *idk samples* after aligning the model using both Absolute and Confidence-Verb:

- *Expected accuracy* is defined as the ratio of correct responses among m sampled responses (m=10 and temperature=1.0), as specified in Lines 183-184.
- *Idk samples* refer to evaluation samples where the aligned model replies with an idk response.

The results are presented in the table below. The first row represents the unaligned model's expected accuracy. The second and third rows display the distribution of idk samples across different expected accuracies for Absolute and Confidence-Verb, respectively.

| Methods \ Expected Accuracy | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
|-----------------------------|---|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| Absolute | 35.54% | 9.91% | 6.55% | 5.18% | 4.38% | 4.38% | 3.59% | 4.27% | 5.98% | 7.18% | 13.04% |
| Confidence-Verb | 39.75% | 9.45% | 6.59% | 5.57% | 3.78% | 4.22% | 3.30% | 4.51% | 4.80% | 5.87% | 12.17% |

From the results, we can see that 39.75% of idk samples for Confidence-Verb occur when the unaligned model fails to provide a correct answer in all 10 attempts, compared to 35.54% for Absolute. More strikingly, the proportions of idk samples for expected accuracy < 0.4 and expected accuracy >= 0.4 are 57.18%:42.82% and 61.36%:38.65%, respectively, for Absolute and Confidence-Verb. This indicates that Confidence-Verb indeed more accurately identifies questions with lower confidence and refuses to answer them, thereby significantly improving the prudence score and ultimately, the overall honesty score.
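As a concrete illustration of this binning, here is a minimal toy sketch in Python. The data and variable names are hypothetical and only illustrate the computation (fraction of correct responses among m samples, then the normalized distribution of idk samples across expected-accuracy bins); this is not the authors' actual evaluation code.

```python
from collections import Counter

def expected_accuracy(sampled_correct):
    # Fraction of correct responses among the m sampled responses
    # (m = 10 in the rebuttal; here m is simply len(sampled_correct)).
    return sum(sampled_correct) / len(sampled_correct)

# Hypothetical records: per question, correctness (1/0) of the unaligned
# model's m sampled responses, and whether the aligned model replies "idk".
records = [
    ([0] * 10,           True),   # expected accuracy 0.0, aligned model says idk
    ([1] + [0] * 9,      True),   # expected accuracy 0.1, aligned model says idk
    ([1] * 4 + [0] * 6,  False),  # expected accuracy 0.4, aligned model answers
    ([1] * 10,           False),  # expected accuracy 1.0, aligned model answers
]

# Distribution of idk samples across expected-accuracy bins (0.0, 0.1, ..., 1.0).
idk_bins = Counter(
    round(expected_accuracy(samples), 1)
    for samples, is_idk in records if is_idk
)
total_idk = sum(idk_bins.values())
distribution = {b: c / total_idk for b, c in sorted(idk_bins.items())}
print(distribution)  # → {0.0: 0.5, 0.1: 0.5}
```

Each table row above is such a distribution, computed over all evaluation questions for one aligned model.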
(2) Regarding accuracy: You have raised an excellent point, and we acknowledge that the effects of the computational load during inference cannot be completely ignored; in fact, accuracy is likely the result of multiple factors intertwined. We would like to highlight another significant factor contributing to the relatively low accuracy of Absolute: "fine-tuning LLMs on weakly known knowledge encourages hallucinations", as evidenced by [1]. Specifically, Section 5 and Table 2 of [1] illustrate that:

> ... Unknown fine-tuning examples **increase the risk of overfitting.** We now observe that this also applies to WeaklyUnknown, though to a lesser degree... This highlights that the decrease in performance is strongly attributed to **an increased rate of hallucinations.**

In our experiments, particularly with training samples with an expected accuracy of 0.1 (i.e., the WeaklyUnknown samples in [1]), Absolute *directly* instructs the model to learn correct responses, which **paradoxically encourages hallucinations instead of grounding in its pre-existing knowledge. Confidence-Verb mitigates this issue by incorporating an explicit confidence prefix.** Fully unraveling the factors for improvement may require more extensive efforts and is worth discussing in future work. We can supplement this analysis in the revised version of our paper, and we sincerely welcome further discussion.

---

[1] Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? https://arxiv.org/abs/2405.05904
Summary: This paper targets honesty as an important dimension of alignment. The work posits that an honest model should respond candidly when it possesses knowledge and humbly acknowledge its limitations when it does not. Given the difficulty in explicitly delineating the boundaries of a model's knowledge, the paper approaches the issue by focusing on questions and constructing corresponding evaluation metrics and training methodologies. Specifically, the paper formalizes the concept of an "I don't know (idk)" response to signify the model's admission of ignorance. Based on this, it introduces metrics such as the prudence score, the over-conservativeness score, and the honesty score. Subsequently, the paper explores alignment techniques. Notably, it proposes several k-functions and constructs corresponding training datasets to perform alignment for honesty and evaluate their effectiveness.

Strengths:
1. The paper introduces a conceptualization of alignment for honesty within AI models. Based on the definition, this work establishes performance metrics that measure the model's honesty. The approach of contrasting model behavior pre- and post-alignment is insightful, as it provides a more comprehensive view of the model's adherence to honesty compared to prior efforts that focused primarily on the factuality of immediate responses.
2. Based on their investigation, they proposed the corresponding fine-tuning methods aimed at enhancing honesty. The proposed approach is easy to follow and can be a robust baseline for the methods in the field.
3. They explore the evaluation under out-of-distribution cases. The experiments present the generalizability of the proposed methods. Also, they investigate the alignment tax, which can be a concern for honesty alignment.

Weaknesses:
1. While the methods proposed in this paper are easy to understand and expectedly effective, they are also inherently heuristic.
As the paper acknowledges, determining the boundaries of a model's knowledge is challenging. Similarly, appropriately selecting hyperparameters to achieve the best "fit" with the model's internal knowledge is relatively difficult. Models at various stages with potentially "different knowledge boundaries and decision boundaries" may require distinct parameters to optimize their honesty effects. The paper only explores the impact of one hyperparameter in Section D.5.1. Further exploration of related hyperparameters could be quite interesting.
2. The evaluation metrics proposed in the paper rely on a baseline model (M_0). However, obtaining this M_0 in a public benchmarking setting may be difficult, thereby limiting the widespread adoption of the evaluation methods proposed. Moreover, determining which M_{t_1} is the base and evaluating it appropriately can also be challenging. For instance, if a model at one stage of training "learns" to answer a question and then loses this ability later, I would be confused about whether it possesses knowledge and honesty.

Technical Quality: 3
Clarity: 3

Questions for Authors: The questions are mostly about the potential "weaknesses" of the work.
1. Is there further investigation and case study of the relation between the hyperparameter and model behavior?
2. Is there further analysis of the model behavior and honesty dynamics during the alignment process?

Rating Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

# Response to Weakness 1 and Question 1

> ... The paper only explores the impact of one hyperparameter in Section D.5.1. It is believed that further exploration of related hyperparameters could be quite interesting.
>
> Is there further investigation and case study of the relation between the hyperparameter and model behavior?

Thank you for your thoughtful comment. We have provided a practical and enlightening analysis of how hyperparameters impact model behavior in Appendix D.5.1. Nevertheless, we are quite open to conducting additional experiments to discover further meaningful insights. Specifically, we explore how the number of unknown questions affects Multisample's performance. We define "absolute unknown questions" (AUQs) as those where all m=10 sampled responses are incorrect, and then gradually reduce the number of AUQs among the 8,000 training questions. Notably, even with AUQs reduced to 0, the *expanded* training dataset for Multisample (please refer to Lines 216-218) still contains samples whose output is an idk response, allowing the model to directly learn this refusal expression. The results are shown in the table below, where "AUQ reduction prop." indicates the proportion of reduced AUQs, and "Training idk prop." indicates the proportion of samples whose output is an idk response relative to the remaining training data. From the results, we can see that overall, as the number of unknown questions decreases, the model becomes less conservative but also less reliable when answering unknown questions, which **underscores the need for empirical optimization of the ratio between unknown and known questions in the training dataset.** We can include these results in the Appendix. If we have misinterpreted any aspect of your questions, or if you have suggestions for more targeted experiments, please feel free to let us know.
| AUQ reduction prop.(%) | Prudence(%)↑ | Over-Consv.(%)↓ | Honesty(%)↑ | Acc(%)↑ | Training idk prop.(%) |
|:-----------------------|:------------:|:---------------:|:-----------:|:-------:|:---------------------:|
| 0 (Multisample) | 67.72 | 15.89 | 75.91 | 68.88 | 48.73 |
| 20 | 69.39 | 16.18 | 76.61 | 68.74 | 45.93 |
| 40 | 69.68 | 17.60 | 76.04 | 68.15 | 42.82 |
| 60 | 60.05 | 13.76 | 73.15 | 69.78 | 39.33 |
| 80 | 53.78 | 11.22 | 71.28 | 71.60 | 35.38 |
| 100 | 48.21 | 10.22 | 69.00 | 71.87 | 30.88 |

# Response to Weakness 2 and Question 2

> The evaluation metrics proposed in the paper rely on a baseline model (M_0). However, obtaining this M_0 in a public benchmarking setting may be difficult, thereby limiting the widespread adoption of the evaluation methods proposed. Moreover, determining which M_{t_1} is base and evaluating it appropriately can also be challenging. For instance, if a model at one stage of training "learns" to answer a question and then loses this ability later, I can be confused about whether it possesses knowledge and honesty.

Thank you for your insightful feedback. We would like to clarify that the flexibility of our proposed evaluation metrics allows us to freely designate a starting point M_t and an endpoint M_t+1 to assess whether the alignment process is beneficial in terms of honesty, without the need to trace back to M_0. For instance, if M_t can correctly answer a question but M_t+1 refuses to answer, then this over-conservativeness is undesirable; if M_t answers incorrectly and M_t+1 refuses to answer, then this prudence is worth encouraging. In Section 4.4.2, we have demonstrated the adaptability of this framework across **multiple open-source LLMs whose specific training stages (M_x) are unknown.**

Regarding the provided case, suppose M_t-1 can correctly answer a question, but M_t cannot: 1.
If we have access to M_t-1, we can use it as a starting point and expect a better aligned M_t' to retain the ability to answer the question correctly.
2. If we only have access to M_t, then with M_t as the starting point, we would prefer M_t+1 to refuse to answer the question rather than answer it incorrectly, thus ensuring its reliability.

However, as we mention in Lines 149-154, we acknowledge that there are more complex scenarios that need to be explored in future work, such as whether we can *elicit latent knowledge* for M_t that M_t-1 possesses but M_t seems to have lost.

> Is there further analysis of the model behavior and honesty dynamics during the alignment process?

We have demonstrated the effectiveness of our honesty-oriented supervised fine-tuning methods across in-domain (Section 4.4), out-of-domain (4.5), and helpfulness-related tasks (4.6), with two real cases provided in Appendix D.8. Our findings show that, although there is a minor reduction in informativeness, the model becomes more reliable and trustworthy, which is a positive outcome from an alignment perspective. Additionally, we find that non-honesty-oriented fine-tuning leads LLMs to hallucinate, whereas honesty-oriented fine-tuning performs better. This highlights the importance of integrating honesty considerations in the model training and alignment process. Overall, these extensive experiments not only depict the current state of alignment for honesty but also guide the direction for future developments in this field. We would be eager to engage in a discussion if there are specific experiments or particular aspects that interest you.

---

Thank you for reviewing our paper. We greatly appreciate your valuable feedback and are ready to address any further questions or concerns you may have.

---

Rebuttal Comment 1.1: Comment: Thanks for the author responses. I have no additional questions.
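The M_t vs. M_t+1 comparison described in this rebuttal can be sketched with a minimal toy example in Python. The data, variable names, and the simple per-subset normalizations below are illustrative assumptions, not the paper's precise score definitions:

```python
# Each record pairs the starting model M_t's outcome on a question
# (did it answer correctly?) with the aligned model M_{t+1}'s behavior.
evals = [
    (True,  "correct"),  # knowledge retained: desirable
    (True,  "idk"),      # over-conservative: M_t knew it, M_{t+1} refuses
    (False, "idk"),      # prudent: M_t was wrong, M_{t+1} refuses
    (False, "wrong"),    # still answers incorrectly: undesirable
]

known = [r for c, r in evals if c]        # questions M_t answered correctly
unknown = [r for c, r in evals if not c]  # questions M_t answered incorrectly

# Over-conservativeness: refusing questions the starting model could answer.
over_conservativeness = known.count("idk") / len(known)
# Prudence: refusing questions the starting model could not answer.
prudence = unknown.count("idk") / len(unknown)

print(over_conservativeness, prudence)  # → 0.5 0.5
```

The point of the sketch is that both scores are defined relative to a chosen starting point M_t, so no privileged M_0 is required.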
Summary: The paper "Alignment for Honesty" addresses the critical challenge of ensuring that large language models (LLMs) consistently produce truthful outputs. The authors propose several techniques to enhance truthfulness, including training on curated datasets, using reinforcement learning from human feedback (RLHF), and implementing advanced filtering and verification mechanisms. The paper suggests specific metrics for evaluating the honesty of LLMs and provides case studies demonstrating the practical application of these techniques. Key contributions include a comprehensive framework for honesty alignment, innovative techniques combining RLHF with curated training, empirical evaluations of these methods, and guidelines for future research.

Strengths:
* The paper introduces a novel framework specifically designed to align large language models (LLMs) with the goal of ensuring honesty, an underexplored but critical aspect of AI alignment.
* The paper is well-structured and clearly written, making complex concepts accessible. Key terms such as "honesty" in the context of AI are well-defined, and the methodology is explained in detail, facilitating understanding.
* Addressing the challenge of honesty in LLMs is highly significant, given the increasing reliance on these models in various applications. The proposed solutions have the potential to greatly improve the reliability and trustworthiness of AI systems.

Weaknesses: There are no obvious weaknesses, but please refer to the following questions:

Technical Quality: 4
Clarity: 3

Questions for Authors:
1. While the article emphasizes the importance of honesty, it does not seem to discuss in detail how to *maintain the model's helpfulness while improving honesty*. Could this lead to the model being overly cautious in practical applications and unable to provide useful information?
2. The paper has simplified some assumptions when defining honesty, such as the distinction between honesty and truthfulness.
The focus is mainly on whether the model can express its internal knowledge, rather than whether its knowledge corresponds to objective facts, right?
3. How about a 'white lie'? Is it truly beneficial for a model to be brutally honest when informing someone they have cancer?

Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Although the paper proposes methods to approximate the model's internal knowledge through external behaviors, it does not delve deeply into the working principles and knowledge representation within the model, which may limit a more profound understanding of the model's honesty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

# Response to Question 1

> While the article emphasizes the importance of honesty, it does not seem to discuss in detail how to *maintain the model's helpfulness while improving honesty*. Could this lead to the model being overly cautious in practical applications and unable to provide useful information?

Thank you for your thoughtful question. We share your concern regarding whether aligning the model with honesty compromises its helpfulness. Therefore, we have assessed the model's helpfulness before and after alignment using helpfulness-related requests in Section 4.6. The results indicate that "the cost of aligning LLMs for honesty does not significantly impact their overall helpfulness." Meanwhile, we would like to share our understanding of these results:

1. Helpfulness-related practical tasks and knowledge-based QA tasks belong to different domains (e.g., "Summarize the following post." vs. "Who was the first president of the USA?"). Consequently, honesty-oriented supervised fine-tuning methods do **not lead the model to reject requests associated with helpfulness.**
2. On the other hand, honesty-oriented supervised fine-tuning does **not cause severe model collapse or a loss of the model's original capabilities,** as demonstrated by the robust accuracy on knowledge-based QA tasks presented in Tab. 3 and 4.

# Response to Question 2

> The paper has simplified some assumptions when defining honesty, such as the distinction between honesty and truthfulness. The focus is mainly on whether the model can express its internal knowledge, rather than whether its knowledge corresponds to objective facts, right?

Thanks for your question. You are correct that the focus of honesty is on whether the model can express its internal knowledge. By emphasizing honesty rather than truthfulness, we aim to explore the model's knowledge boundaries, instead of compelling it to provide accurate information without considering what it has learned.
Please feel free to reach out if you have any further questions or require additional clarifications.

# Response to Question 3

> How about a 'white lie'? Is it truly beneficial for a model to be brutally honest when informing someone they have cancer?

This question is critically important for the development of superintelligence and necessitates the involvement of the entire AI community. Nonetheless, we would like to share our perspective: it is essential that the model remains *entirely* honest (without white lies) with at least some individuals, such as developers. Without such transparency, combining deceit with superhuman capabilities could lead to immeasurable and unforeseen dangers.

# Response to Limitations

> Although the paper proposes methods to approximate the model's internal knowledge through external behaviors, it does not delve deeply into the working principles and knowledge representation within the model, which may limit a more profound understanding of the model's honesty.

Thank you for your insightful comment! We would like to respectfully note that fully understanding the model's internal workings is non-trivial and depends on progress in other research areas, including model interpretability. We are pleased to have taken the first step toward aligning AI models with honesty by establishing a comprehensive and *practical* framework. As we mention in the Conclusion section: "We hope this work can inspire more thoughts on the development of honest AI models in the NLP community."

---

Thank you for taking the time and effort to review our paper. We are more than willing to provide any clarifications if you have further questions or concerns.
null
null
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for your valuable and insightful feedback. We hope our responses have addressed your concerns, but please let us know if you have any further questions or require additional clarifications. The attached PDF includes the experimental results in response to Reviewer MrwZ. Pdf: /pdf/388a4587ede728b99070f504a545ea8552c9cecd.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null